Test Report: Docker_Linux_crio 22186

5e28b85a1d78221970a3d6d4a20cdd5c3710ee83:2025-12-17:42830

Failed tests (27/415)

TestAddons/serial/Volcano (0.27s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:852: skipping: crio not supported
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable volcano --alsologtostderr -v=1: exit status 11 (269.770939ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 19:25:57.488540  385357 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:25:57.488835  385357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:25:57.488846  385357 out.go:374] Setting ErrFile to fd 2...
	I1217 19:25:57.488852  385357 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:25:57.489054  385357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:25:57.489398  385357 mustload.go:66] Loading cluster: addons-695107
	I1217 19:25:57.489755  385357 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:25:57.489774  385357 addons.go:622] checking whether the cluster is paused
	I1217 19:25:57.489874  385357 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:25:57.489891  385357 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:57.490313  385357 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:57.509882  385357 ssh_runner.go:195] Run: systemctl --version
	I1217 19:25:57.509940  385357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:57.530012  385357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:57.631539  385357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:25:57.631627  385357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:25:57.662683  385357 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:25:57.662707  385357 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:25:57.662711  385357 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:25:57.662714  385357 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:25:57.662717  385357 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:25:57.662720  385357 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:25:57.662723  385357 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:25:57.662726  385357 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:25:57.662729  385357 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:25:57.662735  385357 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:25:57.662738  385357 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:25:57.662741  385357 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:25:57.662743  385357 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:25:57.662746  385357 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:25:57.662749  385357 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:25:57.662764  385357 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:25:57.662770  385357 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:25:57.662775  385357 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:25:57.662777  385357 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:25:57.662780  385357 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:25:57.662783  385357 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:25:57.662785  385357 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:25:57.662788  385357 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:25:57.662791  385357 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:25:57.662794  385357 cri.go:89] found id: ""
	I1217 19:25:57.662836  385357 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:25:57.678795  385357 out.go:203] 
	W1217 19:25:57.680252  385357 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:25:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:25:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:25:57.680273  385357 out.go:285] * 
	* 
	W1217 19:25:57.684299  385357 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:25:57.685738  385357 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.27s)

TestAddons/parallel/Registry (13.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 3.243457ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002752654s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003520794s
addons_test.go:394: (dbg) Run:  kubectl --context addons-695107 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-695107 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-695107 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.132194553s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 ip
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable registry --alsologtostderr -v=1: exit status 11 (267.442684ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 19:26:19.958579  388053 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:19.958716  388053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:19.958729  388053 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:19.958733  388053 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:19.958971  388053 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:19.959307  388053 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:19.959708  388053 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:19.959728  388053 addons.go:622] checking whether the cluster is paused
	I1217 19:26:19.959827  388053 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:19.959847  388053 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:19.960290  388053 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:19.978498  388053 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:19.978552  388053 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:19.997994  388053 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:20.099987  388053 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:20.100110  388053 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:20.131749  388053 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:20.131780  388053 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:20.131784  388053 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:20.131788  388053 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:20.131790  388053 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:20.131795  388053 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:20.131798  388053 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:20.131800  388053 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:20.131803  388053 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:20.131818  388053 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:20.131821  388053 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:20.131823  388053 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:20.131826  388053 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:20.131828  388053 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:20.131831  388053 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:20.131843  388053 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:20.131850  388053 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:20.131855  388053 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:20.131857  388053 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:20.131860  388053 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:20.131866  388053 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:20.131868  388053 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:20.131871  388053 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:20.131873  388053 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:20.131876  388053 cri.go:89] found id: ""
	I1217 19:26:20.131940  388053 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:20.146800  388053 out.go:203] 
	W1217 19:26:20.148166  388053 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:20.148197  388053 out.go:285] * 
	* 
	W1217 19:26:20.152118  388053 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:20.153546  388053 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.65s)

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 3.167065ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-695107
addons_test.go:334: (dbg) Run:  kubectl --context addons-695107 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (289.80497ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1217 19:26:12.089966  386699 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:12.090265  386699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:12.090278  386699 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:12.090285  386699 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:12.090586  386699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:12.090973  386699 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:12.091441  386699 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:12.091462  386699 addons.go:622] checking whether the cluster is paused
	I1217 19:26:12.091592  386699 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:12.091619  386699 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:12.092217  386699 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:12.115480  386699 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:12.115556  386699 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:12.145200  386699 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:12.254021  386699 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:12.254124  386699 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:12.288300  386699 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:12.288329  386699 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:12.288336  386699 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:12.288341  386699 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:12.288345  386699 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:12.288350  386699 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:12.288353  386699 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:12.288357  386699 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:12.288362  386699 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:12.288370  386699 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:12.288374  386699 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:12.288389  386699 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:12.288393  386699 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:12.288398  386699 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:12.288406  386699 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:12.288419  386699 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:12.288428  386699 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:12.288434  386699 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:12.288439  386699 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:12.288444  386699 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:12.288448  386699 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:12.288453  386699 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:12.288461  386699 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:12.288465  386699 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:12.288474  386699 cri.go:89] found id: ""
	I1217 19:26:12.288525  386699 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:12.302878  386699 out.go:203] 
	W1217 19:26:12.304198  386699 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:12.304222  386699 out.go:285] * 
	* 
	W1217 19:26:12.308069  386699 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:12.309626  386699 out.go:203] 

** /stderr **
addons_test.go:1057: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (146.01s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-695107 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-695107 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-695107 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [4e9cb94b-f2f5-44ee-aca5-ab47aaec0103] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [4e9cb94b-f2f5-44ee-aca5-ab47aaec0103] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003920812s
I1217 19:26:21.275054  375797 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:266: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.462363488s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:282: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:290: (dbg) Run:  kubectl --context addons-695107 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.49.2
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-695107
helpers_test.go:244: (dbg) docker inspect addons-695107:

-- stdout --
	[
	    {
	        "Id": "a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b",
	        "Created": "2025-12-17T19:24:47.200826359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T19:24:47.241881892Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/hosts",
	        "LogPath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b-json.log",
	        "Name": "/addons-695107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-695107:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-695107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b",
	                "LowerDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-695107",
	                "Source": "/var/lib/docker/volumes/addons-695107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-695107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-695107",
	                "name.minikube.sigs.k8s.io": "addons-695107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "700334be7342dcd9e2d5ec85ed0e268a4b88bcf2909b690c046ce972efebad24",
	            "SandboxKey": "/var/run/docker/netns/700334be7342",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-695107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86529ba95000ece5f19f992e0cff5b1ae18c2ea573e6a29bf2ac9f27693ae01b",
	                    "EndpointID": "4be3ee9ef435aa25872287041841053f86551477b6e548b82bbea555d3fab478",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "be:38:90:54:d8:d8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-695107",
	                        "a25be454b6b6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-695107 -n addons-695107
helpers_test.go:253: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-695107 logs -n 25: (1.171548854s)
helpers_test.go:261: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-277393 --alsologtostderr --binary-mirror http://127.0.0.1:41979 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-277393 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ -p binary-mirror-277393                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-277393 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ enable dashboard -p addons-695107                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ addons  │ disable dashboard -p addons-695107                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ start   │ -p addons-695107 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:25 UTC │
	│ addons  │ addons-695107 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:25 UTC │                     │
	│ addons  │ addons-695107 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ enable headlamp -p addons-695107 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-695107                                                                                                                                                                                                                                                                                                                                                                                           │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │ 17 Dec 25 19:26 UTC │
	│ addons  │ addons-695107 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ ip      │ addons-695107 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │ 17 Dec 25 19:26 UTC │
	│ addons  │ addons-695107 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ ssh     │ addons-695107 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ ssh     │ addons-695107 ssh cat /opt/local-path-provisioner/pvc-53e85d6e-9bfa-403c-aeb8-846b9e87923f_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │ 17 Dec 25 19:26 UTC │
	│ addons  │ addons-695107 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ addons-695107 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ ip      │ addons-695107 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-695107        │ jenkins │ v1.37.0 │ 17 Dec 25 19:28 UTC │ 17 Dec 25 19:28 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:24:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
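The header above documents the klog-style record layout used for the rest of this trace: one-letter severity (I/W/E/F), the date as mmdd, a microsecond timestamp, the thread id, and the emitting file:line. As a purely illustrative aid for reading a saved copy of such a log (the last-start.log filename is hypothetical), warnings and errors can be filtered out like this:

    # Keep only W/E/F records from a klog-formatted file; leading tabs are stripped first.
    sed 's/^[[:space:]]*//' last-start.log | grep -E '^[WEF][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}'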
	I1217 19:24:24.126785  377556 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:24:24.126878  377556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:24.126883  377556 out.go:374] Setting ErrFile to fd 2...
	I1217 19:24:24.126887  377556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:24.127086  377556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:24:24.127642  377556 out.go:368] Setting JSON to false
	I1217 19:24:24.128538  377556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4015,"bootTime":1765995449,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:24:24.128603  377556 start.go:143] virtualization: kvm guest
	I1217 19:24:24.130321  377556 out.go:179] * [addons-695107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:24:24.131751  377556 notify.go:221] Checking for updates...
	I1217 19:24:24.131762  377556 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:24:24.133167  377556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:24:24.134423  377556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:24:24.135417  377556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:24:24.136394  377556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:24:24.137281  377556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:24:24.138442  377556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:24:24.161791  377556 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:24:24.161958  377556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:24.215528  377556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 19:24:24.20631484 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:24.215629  377556 docker.go:319] overlay module found
	I1217 19:24:24.217285  377556 out.go:179] * Using the docker driver based on user configuration
	I1217 19:24:24.218407  377556 start.go:309] selected driver: docker
	I1217 19:24:24.218424  377556 start.go:927] validating driver "docker" against <nil>
	I1217 19:24:24.218438  377556 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:24:24.219003  377556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:24.275175  377556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 19:24:24.265693812 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:24.275377  377556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:24:24.275585  377556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:24:24.277232  377556 out.go:179] * Using Docker driver with root privileges
	I1217 19:24:24.278459  377556 cni.go:84] Creating CNI manager for ""
	I1217 19:24:24.278522  377556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:24:24.278534  377556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 19:24:24.278611  377556 start.go:353] cluster config:
	{Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 19:24:24.279911  377556 out.go:179] * Starting "addons-695107" primary control-plane node in "addons-695107" cluster
	I1217 19:24:24.281018  377556 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 19:24:24.282282  377556 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 19:24:24.283468  377556 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:24:24.283503  377556 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 19:24:24.283511  377556 cache.go:65] Caching tarball of preloaded images
	I1217 19:24:24.283554  377556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 19:24:24.283608  377556 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 19:24:24.283620  377556 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 19:24:24.284034  377556 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json ...
	I1217 19:24:24.284064  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json: {Name:mka6729ae10fb93e1afc67a6d287fd4103077927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
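The full cluster config dumped above is persisted to the profile's config.json shown in the log line before this one, so it can be inspected after the run. A small sketch, assuming jq is available on the host and that the JSON field names match the struct dump above:

    # Pull the driver and Kubernetes version back out of the saved profile (path from the log).
    jq '{Driver: .Driver, KubernetesVersion: .KubernetesConfig.KubernetesVersion}' \
      /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json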
	I1217 19:24:24.300139  377556 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1217 19:24:24.300290  377556 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1217 19:24:24.300311  377556 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory, skipping pull
	I1217 19:24:24.300321  377556 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in cache, skipping pull
	I1217 19:24:24.300328  377556 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 as a tarball
	I1217 19:24:24.300335  377556 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from local cache
	I1217 19:24:37.335348  377556 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from cached tarball
	I1217 19:24:37.335399  377556 cache.go:243] Successfully downloaded all kic artifacts
	I1217 19:24:37.335462  377556 start.go:360] acquireMachinesLock for addons-695107: {Name:mkaa3d9b802c6da07df7c3f5fae85058f2767d38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:24:37.335619  377556 start.go:364] duration metric: took 127.98µs to acquireMachinesLock for "addons-695107"
	I1217 19:24:37.335678  377556 start.go:93] Provisioning new machine with config: &{Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:24:37.335765  377556 start.go:125] createHost starting for "" (driver="docker")
	I1217 19:24:37.338497  377556 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 19:24:37.338809  377556 start.go:159] libmachine.API.Create for "addons-695107" (driver="docker")
	I1217 19:24:37.338854  377556 client.go:173] LocalClient.Create starting
	I1217 19:24:37.338973  377556 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:24:37.430093  377556 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:24:37.483306  377556 cli_runner.go:164] Run: docker network inspect addons-695107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:24:37.501673  377556 cli_runner.go:211] docker network inspect addons-695107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:24:37.501781  377556 network_create.go:284] running [docker network inspect addons-695107] to gather additional debugging logs...
	I1217 19:24:37.501808  377556 cli_runner.go:164] Run: docker network inspect addons-695107
	W1217 19:24:37.519300  377556 cli_runner.go:211] docker network inspect addons-695107 returned with exit code 1
	I1217 19:24:37.519346  377556 network_create.go:287] error running [docker network inspect addons-695107]: docker network inspect addons-695107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-695107 not found
	I1217 19:24:37.519368  377556 network_create.go:289] output of [docker network inspect addons-695107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-695107 not found
	
	** /stderr **
	I1217 19:24:37.519506  377556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:24:37.537532  377556 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f964b0}
	I1217 19:24:37.537575  377556 network_create.go:124] attempt to create docker network addons-695107 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 19:24:37.537631  377556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-695107 addons-695107
	I1217 19:24:37.586320  377556 network_create.go:108] docker network addons-695107 192.168.49.0/24 created
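Since the 192.168.49.0/24 network was created with an explicit gateway and MTU, one way to double-check it on the host is to reuse the same inspect template minikube itself runs above, for example:

    # Illustrative verification of the freshly created docker network.
    docker network inspect addons-695107 \
      --format 'subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}} mtu={{index .Options "com.docker.network.driver.mtu"}}'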
	I1217 19:24:37.586356  377556 kic.go:121] calculated static IP "192.168.49.2" for the "addons-695107" container
	I1217 19:24:37.586437  377556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:24:37.603910  377556 cli_runner.go:164] Run: docker volume create addons-695107 --label name.minikube.sigs.k8s.io=addons-695107 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:24:37.622670  377556 oci.go:103] Successfully created a docker volume addons-695107
	I1217 19:24:37.622749  377556 cli_runner.go:164] Run: docker run --rm --name addons-695107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-695107 --entrypoint /usr/bin/test -v addons-695107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:24:43.305993  377556 cli_runner.go:217] Completed: docker run --rm --name addons-695107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-695107 --entrypoint /usr/bin/test -v addons-695107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (5.683199737s)
	I1217 19:24:43.306025  377556 oci.go:107] Successfully prepared a docker volume addons-695107
	I1217 19:24:43.306100  377556 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:24:43.306117  377556 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 19:24:43.306215  377556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-695107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 19:24:47.128824  377556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-695107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.822557769s)
	I1217 19:24:47.128873  377556 kic.go:203] duration metric: took 3.822753031s to extract preloaded images to volume ...
	W1217 19:24:47.128958  377556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:24:47.128998  377556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:24:47.129038  377556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:24:47.184052  377556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-695107 --name addons-695107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-695107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-695107 --network addons-695107 --ip 192.168.49.2 --volume addons-695107:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:24:47.462500  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Running}}
	I1217 19:24:47.481698  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:24:47.499771  377556 cli_runner.go:164] Run: docker exec addons-695107 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:24:47.547992  377556 oci.go:144] the created container "addons-695107" has a running status.
	I1217 19:24:47.548034  377556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa...
	I1217 19:24:47.722949  377556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:24:47.749200  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:24:47.777795  377556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:24:47.777846  377556 kic_runner.go:114] Args: [docker exec --privileged addons-695107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:24:47.829734  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:24:47.851229  377556 machine.go:94] provisionDockerMachine start ...
	I1217 19:24:47.851327  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:47.872171  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:47.872477  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:47.872499  377556 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:24:48.017223  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-695107
	
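The native SSH client above reaches the kic container through the published port 33143 on 127.0.0.1 using the generated machine key. A hand-rolled equivalent, sometimes handy when a provisioning step hangs, would look roughly like this (port and key path are the ones logged above):

    # Manual version of the provisioning SSH session (illustrative only).
    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa \
      -p 33143 docker@127.0.0.1 hostname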
	I1217 19:24:48.017261  377556 ubuntu.go:182] provisioning hostname "addons-695107"
	I1217 19:24:48.017333  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.036925  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:48.037410  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:48.037440  377556 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-695107 && echo "addons-695107" | sudo tee /etc/hostname
	I1217 19:24:48.194844  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-695107
	
	I1217 19:24:48.194956  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.213589  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:48.213961  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:48.213990  377556 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-695107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-695107/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-695107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:24:48.359274  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:24:48.359308  377556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 19:24:48.359364  377556 ubuntu.go:190] setting up certificates
	I1217 19:24:48.359382  377556 provision.go:84] configureAuth start
	I1217 19:24:48.359447  377556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-695107
	I1217 19:24:48.378364  377556 provision.go:143] copyHostCerts
	I1217 19:24:48.378440  377556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 19:24:48.378618  377556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 19:24:48.378698  377556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 19:24:48.378764  377556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.addons-695107 san=[127.0.0.1 192.168.49.2 addons-695107 localhost minikube]
	I1217 19:24:48.420807  377556 provision.go:177] copyRemoteCerts
	I1217 19:24:48.420872  377556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:24:48.420918  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.439742  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:48.542652  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:24:48.562286  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 19:24:48.579495  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:24:48.596512  377556 provision.go:87] duration metric: took 237.115873ms to configureAuth
	I1217 19:24:48.596545  377556 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:24:48.596737  377556 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:24:48.596857  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.615680  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:48.615923  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:48.615946  377556 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:24:48.908292  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
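The tee/restart above leaves the insecure-registry flag for the service CIDR in /etc/sysconfig/crio.minikube inside the node. A quick check from the host once the cluster is up, mirroring the ssh invocations in the command table at the top of this report:

    # Print the CRI-O drop-in that provisioning just wrote (illustrative check).
    out/minikube-linux-amd64 -p addons-695107 ssh "cat /etc/sysconfig/crio.minikube"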
	I1217 19:24:48.908326  377556 machine.go:97] duration metric: took 1.057069076s to provisionDockerMachine
	I1217 19:24:48.908342  377556 client.go:176] duration metric: took 11.56947608s to LocalClient.Create
	I1217 19:24:48.908367  377556 start.go:167] duration metric: took 11.569560109s to libmachine.API.Create "addons-695107"
	I1217 19:24:48.908378  377556 start.go:293] postStartSetup for "addons-695107" (driver="docker")
	I1217 19:24:48.908398  377556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:24:48.908491  377556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:24:48.908544  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.927576  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.031768  377556 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:24:49.035634  377556 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:24:49.035665  377556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:24:49.035683  377556 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:24:49.035765  377556 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:24:49.035805  377556 start.go:296] duration metric: took 127.418735ms for postStartSetup
	I1217 19:24:49.036223  377556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-695107
	I1217 19:24:49.054364  377556 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json ...
	I1217 19:24:49.054674  377556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:24:49.054736  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:49.074658  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.173512  377556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:24:49.178254  377556 start.go:128] duration metric: took 11.842472013s to createHost
	I1217 19:24:49.178279  377556 start.go:83] releasing machines lock for "addons-695107", held for 11.84264303s
	I1217 19:24:49.178344  377556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-695107
	I1217 19:24:49.196721  377556 ssh_runner.go:195] Run: cat /version.json
	I1217 19:24:49.196792  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:49.196800  377556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:24:49.196933  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:49.215979  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.216293  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.367111  377556 ssh_runner.go:195] Run: systemctl --version
	I1217 19:24:49.373648  377556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:24:49.409219  377556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:24:49.414155  377556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:24:49.414231  377556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:24:49.441204  377556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
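Because kindnet is the recommended CNI for the docker driver with cri-o, the stock bridge/podman configs are renamed with a .mk_disabled suffix rather than deleted. Listing the directory from the node shows which configs remain active (a sketch, run after the node is reachable):

    # The *.mk_disabled files are the bridge/podman configs moved aside above.
    out/minikube-linux-amd64 -p addons-695107 ssh "sudo ls -l /etc/cni/net.d"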
	I1217 19:24:49.441239  377556 start.go:496] detecting cgroup driver to use...
	I1217 19:24:49.441282  377556 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:24:49.441336  377556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:24:49.458347  377556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:24:49.471281  377556 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:24:49.471335  377556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:24:49.488924  377556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:24:49.507099  377556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:24:49.589374  377556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:24:49.676445  377556 docker.go:234] disabling docker service ...
	I1217 19:24:49.676509  377556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:24:49.695760  377556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:24:49.708957  377556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:24:49.795271  377556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:24:49.878164  377556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:24:49.890692  377556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:24:49.904628  377556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:24:49.904679  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.914534  377556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:24:49.914601  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.923632  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.932217  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.940812  377556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:24:49.948972  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.958499  377556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.972771  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.981741  377556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:24:49.989429  377556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:24:49.997504  377556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:24:50.075968  377556 ssh_runner.go:195] Run: sudo systemctl restart crio
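The chain of sed edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: the pause image, the cgroup manager, the conmon cgroup, and the unprivileged-port sysctl. A spot check of the resulting file (illustrative; the grep pattern simply mirrors the keys edited above):

    # Confirm the edited keys landed in the CRI-O drop-in config.
    out/minikube-linux-amd64 -p addons-695107 ssh \
      "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"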
	I1217 19:24:50.211126  377556 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:24:50.211227  377556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:24:50.215194  377556 start.go:564] Will wait 60s for crictl version
	I1217 19:24:50.215247  377556 ssh_runner.go:195] Run: which crictl
	I1217 19:24:50.218819  377556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:24:50.244459  377556 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:24:50.244538  377556 ssh_runner.go:195] Run: crio --version
	I1217 19:24:50.273030  377556 ssh_runner.go:195] Run: crio --version
	I1217 19:24:50.306352  377556 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 19:24:50.308153  377556 cli_runner.go:164] Run: docker network inspect addons-695107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:24:50.325215  377556 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 19:24:50.329413  377556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:24:50.339820  377556 kubeadm.go:884] updating cluster {Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:24:50.339961  377556 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:24:50.340021  377556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:24:50.370798  377556 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:24:50.370820  377556 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:24:50.370866  377556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:24:50.398361  377556 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:24:50.398386  377556 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:24:50.398394  377556 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 19:24:50.398506  377556 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-695107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
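The kubelet flags rendered above are written into a systemd drop-in (the scp to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf appears a few lines further down), so the effective unit can be reviewed from the node. A minimal sketch:

    # Show the kubelet unit plus drop-ins as systemd sees them inside the node.
    out/minikube-linux-amd64 -p addons-695107 ssh "sudo systemctl cat kubelet"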
	I1217 19:24:50.398589  377556 ssh_runner.go:195] Run: crio config
	I1217 19:24:50.445810  377556 cni.go:84] Creating CNI manager for ""
	I1217 19:24:50.445834  377556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:24:50.445851  377556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:24:50.445880  377556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-695107 NodeName:addons-695107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:24:50.446028  377556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-695107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
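The rendered kubeadm config above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp a few lines below), so it can be read back before init if a bootstrap problem needs triage; for instance:

    # Read back the staged kubeadm config from the node (illustrative).
    out/minikube-linux-amd64 -p addons-695107 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"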
	I1217 19:24:50.446119  377556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 19:24:50.454668  377556 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:24:50.454757  377556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:24:50.462996  377556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 19:24:50.475974  377556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:24:50.491924  377556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 19:24:50.505161  377556 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:24:50.508992  377556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:24:50.519852  377556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:24:50.601686  377556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:24:50.625594  377556 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107 for IP: 192.168.49.2
	I1217 19:24:50.625622  377556 certs.go:195] generating shared ca certs ...
	I1217 19:24:50.625645  377556 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.625813  377556 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:24:50.784108  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt ...
	I1217 19:24:50.784153  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt: {Name:mka8faad6b0d9cfe9eff735b660a85cc4b3def2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.784356  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key ...
	I1217 19:24:50.784368  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key: {Name:mk1599aec95e8473475cf64374004073927776cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.784457  377556 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:24:50.814125  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt ...
	I1217 19:24:50.814182  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt: {Name:mk756fb6e2f220465394bbd8d88a3fc31836c1bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.814378  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key ...
	I1217 19:24:50.814391  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key: {Name:mk271354f73027bd48ba21a5a5e9a21db166cab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.814495  377556 certs.go:257] generating profile certs ...
	I1217 19:24:50.814563  377556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.key
	I1217 19:24:50.814579  377556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt with IP's: []
	I1217 19:24:50.879385  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt ...
	I1217 19:24:50.879427  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: {Name:mkca787255fc48452b56c2a6c08bfd95dd7307db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.879626  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.key ...
	I1217 19:24:50.879643  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.key: {Name:mk3392fc258ab3f5eb01658f05c7245392cb66a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.879720  377556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526
	I1217 19:24:50.879742  377556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 19:24:50.974365  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526 ...
	I1217 19:24:50.974403  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526: {Name:mke9c0d6fff2cdc1fc4c7f9a670a76f1aa124df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.974592  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526 ...
	I1217 19:24:50.974607  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526: {Name:mkb5d41c2e17e562ad6a3d630d01716c086df6c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.974688  377556 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt
	I1217 19:24:50.974804  377556 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key
	I1217 19:24:50.974873  377556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key
	I1217 19:24:50.974896  377556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt with IP's: []
	I1217 19:24:51.002226  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt ...
	I1217 19:24:51.002264  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt: {Name:mk5b0366c4e469d7eeda8c677bd7e7fe88fcde19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:51.002454  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key ...
	I1217 19:24:51.002469  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key: {Name:mkae7871a9cb882df4155f0d4ec3bef895fd8530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:51.002662  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:24:51.002702  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:24:51.002735  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:24:51.002774  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:24:51.003524  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:24:51.022376  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:24:51.041251  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:24:51.059509  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:24:51.077617  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 19:24:51.095597  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:24:51.112586  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:24:51.130434  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 19:24:51.148409  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:24:51.168994  377556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:24:51.182483  377556 ssh_runner.go:195] Run: openssl version
	I1217 19:24:51.188793  377556 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.196803  377556 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:24:51.207139  377556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.210920  377556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.210977  377556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.245501  377556 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:24:51.253723  377556 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
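	Note: the ls/openssl/ln steps above follow OpenSSL's hashed-directory convention: tools that scan /etc/ssl/certs look a CA up by its subject-name hash, so the certificate is exposed under a symlink named <hash>.0 (b5213941.0 here). A minimal shell sketch of the same convention, assuming a CA file at /usr/share/ca-certificates/minikubeCA.pem; the hash value differs per certificate:

		# compute the subject-name hash OpenSSL uses for trust-directory lookups
		HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
		# link the CA into the trust directory, then add the <hash>.0 alias
		sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
		sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"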
	I1217 19:24:51.261603  377556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:24:51.265182  377556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:24:51.265247  377556 kubeadm.go:401] StartCluster: {Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:24:51.265390  377556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:24:51.265452  377556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:24:51.293126  377556 cri.go:89] found id: ""
	I1217 19:24:51.293200  377556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:24:51.301544  377556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:24:51.309824  377556 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:24:51.309898  377556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:24:51.318727  377556 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:24:51.318747  377556 kubeadm.go:158] found existing configuration files:
	
	I1217 19:24:51.318789  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:24:51.326961  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:24:51.327034  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:24:51.334664  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:24:51.343069  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:24:51.343189  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:24:51.350569  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:24:51.358457  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:24:51.358527  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:24:51.366337  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:24:51.373916  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:24:51.373983  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:24:51.381336  377556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:24:51.448229  377556 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 19:24:51.510191  377556 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 19:25:00.926187  377556 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 19:25:00.926245  377556 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:25:00.926368  377556 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 19:25:00.926431  377556 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 19:25:00.926462  377556 kubeadm.go:319] OS: Linux
	I1217 19:25:00.926530  377556 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 19:25:00.926604  377556 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 19:25:00.926674  377556 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 19:25:00.926742  377556 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 19:25:00.926807  377556 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 19:25:00.926879  377556 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 19:25:00.926950  377556 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 19:25:00.927023  377556 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 19:25:00.927143  377556 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:25:00.927236  377556 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:25:00.927315  377556 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 19:25:00.927374  377556 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:25:00.930139  377556 out.go:252]   - Generating certificates and keys ...
	I1217 19:25:00.930216  377556 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:25:00.930271  377556 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:25:00.930325  377556 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:25:00.930408  377556 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:25:00.930500  377556 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:25:00.930577  377556 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:25:00.930667  377556 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:25:00.930806  377556 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-695107 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 19:25:00.930876  377556 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:25:00.931013  377556 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-695107 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 19:25:00.931163  377556 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:25:00.931245  377556 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:25:00.931305  377556 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:25:00.931394  377556 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:25:00.931447  377556 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:25:00.931492  377556 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 19:25:00.931538  377556 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:25:00.931590  377556 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:25:00.931633  377556 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:25:00.931698  377556 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:25:00.931751  377556 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:25:00.932955  377556 out.go:252]   - Booting up control plane ...
	I1217 19:25:00.933036  377556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:25:00.933131  377556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:25:00.933212  377556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:25:00.933336  377556 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:25:00.933447  377556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 19:25:00.933531  377556 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 19:25:00.933604  377556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:25:00.933673  377556 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:25:00.933818  377556 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 19:25:00.933948  377556 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 19:25:00.934041  377556 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.891231ms
	I1217 19:25:00.934184  377556 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 19:25:00.934298  377556 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 19:25:00.934423  377556 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 19:25:00.934525  377556 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 19:25:00.934629  377556 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005586878s
	I1217 19:25:00.934716  377556 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.149012348s
	I1217 19:25:00.934803  377556 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001316036s
	I1217 19:25:00.934928  377556 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:25:00.935048  377556 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:25:00.935107  377556 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:25:00.935292  377556 kubeadm.go:319] [mark-control-plane] Marking the node addons-695107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:25:00.935388  377556 kubeadm.go:319] [bootstrap-token] Using token: qz59t1.jmxpy6ch9p6pe8xc
	I1217 19:25:00.936700  377556 out.go:252]   - Configuring RBAC rules ...
	I1217 19:25:00.936803  377556 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:25:00.936902  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:25:00.937124  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:25:00.937359  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:25:00.937524  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:25:00.937664  377556 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:25:00.937869  377556 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:25:00.937940  377556 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:25:00.938010  377556 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:25:00.938021  377556 kubeadm.go:319] 
	I1217 19:25:00.938123  377556 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:25:00.938135  377556 kubeadm.go:319] 
	I1217 19:25:00.938244  377556 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:25:00.938265  377556 kubeadm.go:319] 
	I1217 19:25:00.938308  377556 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:25:00.938400  377556 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:25:00.938484  377556 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:25:00.938494  377556 kubeadm.go:319] 
	I1217 19:25:00.938573  377556 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:25:00.938586  377556 kubeadm.go:319] 
	I1217 19:25:00.938658  377556 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:25:00.938668  377556 kubeadm.go:319] 
	I1217 19:25:00.938744  377556 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:25:00.938858  377556 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:25:00.938946  377556 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:25:00.938956  377556 kubeadm.go:319] 
	I1217 19:25:00.939113  377556 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:25:00.939224  377556 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:25:00.939235  377556 kubeadm.go:319] 
	I1217 19:25:00.939366  377556 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qz59t1.jmxpy6ch9p6pe8xc \
	I1217 19:25:00.939491  377556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 19:25:00.939532  377556 kubeadm.go:319] 	--control-plane 
	I1217 19:25:00.939553  377556 kubeadm.go:319] 
	I1217 19:25:00.939659  377556 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:25:00.939682  377556 kubeadm.go:319] 
	I1217 19:25:00.939803  377556 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qz59t1.jmxpy6ch9p6pe8xc \
	I1217 19:25:00.939966  377556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
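	Note: the --discovery-token-ca-cert-hash shown in the join commands above is the SHA-256 digest of the cluster CA's DER-encoded public key. A sketch of the standard kubeadm recipe for recomputing it on the control plane, assuming the CA written to /var/lib/minikube/certs/ca.crt by the [certs] phase above:

		openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'

	The hex output corresponds to the sha256: value embedded in the join command.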
	I1217 19:25:00.939988  377556 cni.go:84] Creating CNI manager for ""
	I1217 19:25:00.940000  377556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:25:00.941573  377556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 19:25:00.942627  377556 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 19:25:00.947061  377556 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 19:25:00.947087  377556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 19:25:00.961366  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 19:25:01.168583  377556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:25:01.168674  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:01.168674  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-695107 minikube.k8s.io/updated_at=2025_12_17T19_25_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=addons-695107 minikube.k8s.io/primary=true
	I1217 19:25:01.178515  377556 ops.go:34] apiserver oom_adj: -16
	I1217 19:25:01.267311  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:01.767932  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:02.268089  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:02.768056  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:03.268102  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:03.768303  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:04.268304  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:04.768278  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:05.268167  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:05.768314  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:05.836097  377556 kubeadm.go:1114] duration metric: took 4.66747042s to wait for elevateKubeSystemPrivileges
	I1217 19:25:05.836142  377556 kubeadm.go:403] duration metric: took 14.570903914s to StartCluster
	I1217 19:25:05.836172  377556 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:25:05.836291  377556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:25:05.836690  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:25:05.836915  377556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:25:05.836935  377556 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:25:05.836995  377556 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1217 19:25:05.837155  377556 addons.go:70] Setting yakd=true in profile "addons-695107"
	I1217 19:25:05.837165  377556 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-695107"
	I1217 19:25:05.837183  377556 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-695107"
	I1217 19:25:05.837188  377556 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-695107"
	I1217 19:25:05.837212  377556 addons.go:70] Setting volumesnapshots=true in profile "addons-695107"
	I1217 19:25:05.837221  377556 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:25:05.837228  377556 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-695107"
	I1217 19:25:05.837229  377556 addons.go:239] Setting addon volumesnapshots=true in "addons-695107"
	I1217 19:25:05.837237  377556 addons.go:70] Setting registry-creds=true in profile "addons-695107"
	I1217 19:25:05.837264  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837265  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837175  377556 addons.go:239] Setting addon yakd=true in "addons-695107"
	I1217 19:25:05.837248  377556 addons.go:70] Setting registry=true in profile "addons-695107"
	I1217 19:25:05.837204  377556 addons.go:70] Setting volcano=true in profile "addons-695107"
	I1217 19:25:05.837290  377556 addons.go:239] Setting addon registry=true in "addons-695107"
	I1217 19:25:05.837295  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837309  377556 addons.go:239] Setting addon volcano=true in "addons-695107"
	I1217 19:25:05.837322  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837328  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837526  377556 addons.go:70] Setting ingress-dns=true in profile "addons-695107"
	I1217 19:25:05.837692  377556 addons.go:239] Setting addon ingress-dns=true in "addons-695107"
	I1217 19:25:05.837217  377556 addons.go:70] Setting storage-provisioner=true in profile "addons-695107"
	I1217 19:25:05.837750  377556 addons.go:239] Setting addon storage-provisioner=true in "addons-695107"
	I1217 19:25:05.837778  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837822  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837824  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837829  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837829  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837197  377556 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-695107"
	I1217 19:25:05.837957  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.838212  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.838259  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837580  377556 addons.go:70] Setting default-storageclass=true in profile "addons-695107"
	I1217 19:25:05.838682  377556 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-695107"
	I1217 19:25:05.838988  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.839205  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837613  377556 addons.go:70] Setting ingress=true in profile "addons-695107"
	I1217 19:25:05.839267  377556 addons.go:239] Setting addon ingress=true in "addons-695107"
	I1217 19:25:05.839309  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837592  377556 addons.go:70] Setting metrics-server=true in profile "addons-695107"
	I1217 19:25:05.839410  377556 addons.go:239] Setting addon metrics-server=true in "addons-695107"
	I1217 19:25:05.839441  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.839929  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837575  377556 addons.go:70] Setting gcp-auth=true in profile "addons-695107"
	I1217 19:25:05.841012  377556 mustload.go:66] Loading cluster: addons-695107
	I1217 19:25:05.837602  377556 addons.go:70] Setting inspektor-gadget=true in profile "addons-695107"
	I1217 19:25:05.841056  377556 addons.go:239] Setting addon inspektor-gadget=true in "addons-695107"
	I1217 19:25:05.841106  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.841261  377556 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:25:05.841529  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.841543  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837266  377556 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-695107"
	I1217 19:25:05.841964  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837829  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837620  377556 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-695107"
	I1217 19:25:05.843986  377556 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-695107"
	I1217 19:25:05.844047  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.844655  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837660  377556 addons.go:239] Setting addon registry-creds=true in "addons-695107"
	I1217 19:25:05.837628  377556 addons.go:70] Setting cloud-spanner=true in profile "addons-695107"
	I1217 19:25:05.847213  377556 out.go:179] * Verifying Kubernetes components...
	I1217 19:25:05.848193  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.848450  377556 addons.go:239] Setting addon cloud-spanner=true in "addons-695107"
	I1217 19:25:05.848491  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.848716  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.849903  377556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:25:05.850631  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.850770  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.852239  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.911067  377556 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 19:25:05.914710  377556 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 19:25:05.918119  377556 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 19:25:05.918545  377556 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 19:25:05.918570  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 19:25:05.918636  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.919696  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 19:25:05.919768  377556 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 19:25:05.919853  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.921728  377556 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 19:25:05.923193  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 19:25:05.923215  377556 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 19:25:05.923288  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.923658  377556 addons.go:239] Setting addon default-storageclass=true in "addons-695107"
	I1217 19:25:05.923710  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.924250  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.931850  377556 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 19:25:05.931930  377556 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1217 19:25:05.932354  377556 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 19:25:05.933236  377556 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 19:25:05.933255  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 19:25:05.933313  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.933886  377556 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 19:25:05.934569  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 19:25:05.934727  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.941047  377556 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-695107"
	I1217 19:25:05.941118  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.934616  377556 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:25:05.942982  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.945897  377556 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 19:25:05.945956  377556 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 19:25:05.946046  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 19:25:05.946149  377556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:25:05.946975  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:25:05.947052  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.947366  377556 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 19:25:05.947385  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 19:25:05.947387  377556 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 19:25:05.947404  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 19:25:05.947434  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.947457  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.948099  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 19:25:05.948117  377556 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 19:25:05.948161  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.959889  377556 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 19:25:05.965754  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 19:25:05.966859  377556 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 19:25:05.966886  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 19:25:05.966956  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.968444  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 19:25:05.970229  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 19:25:05.971667  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 19:25:05.972871  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 19:25:05.972936  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:25:05.974272  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:25:05.974333  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 19:25:05.978357  377556 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 19:25:05.978392  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 19:25:05.978454  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.979960  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 19:25:05.980667  377556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
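	Note: the sed pipeline above rewrites the coredns ConfigMap in place so that host.minikube.internal resolves to the host gateway IP. Based on the two sed expressions, the patched Corefile ends up containing roughly this (abridged; indentation as in the stock Corefile):

		        log
		        errors
		        ...
		        hosts {
		           192.168.49.1 host.minikube.internal
		           fallthrough
		        }
		        forward . /etc/resolv.conf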
	I1217 19:25:05.980938  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.981112  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:05.982258  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 19:25:05.983974  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 19:25:05.985161  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 19:25:05.985185  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 19:25:05.985255  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.011979  377556 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 19:25:06.013263  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.013832  377556 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 19:25:06.013910  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 19:25:06.014033  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.014718  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.019357  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.023295  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.025182  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.032141  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.035254  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.035767  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.039058  377556 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 19:25:06.039924  377556 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:25:06.039944  377556 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:25:06.040014  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.043652  377556 out.go:179]   - Using image docker.io/busybox:stable
	I1217 19:25:06.050818  377556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 19:25:06.050843  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 19:25:06.050912  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.053628  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.055242  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.057119  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	W1217 19:25:06.060510  377556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:25:06.060658  377556 retry.go:31] will retry after 350.294268ms: ssh: handshake failed: EOF
	I1217 19:25:06.075670  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.092660  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	W1217 19:25:06.099148  377556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:25:06.099180  377556 retry.go:31] will retry after 144.182296ms: ssh: handshake failed: EOF
	I1217 19:25:06.104729  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	W1217 19:25:06.108930  377556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:25:06.108959  377556 retry.go:31] will retry after 296.035682ms: ssh: handshake failed: EOF
	I1217 19:25:06.113860  377556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:25:06.191340  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 19:25:06.191368  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 19:25:06.200312  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 19:25:06.207990  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 19:25:06.216064  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 19:25:06.216117  377556 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 19:25:06.220673  377556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 19:25:06.220697  377556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 19:25:06.233339  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 19:25:06.233935  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 19:25:06.244295  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:25:06.244883  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 19:25:06.245159  377556 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 19:25:06.255307  377556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 19:25:06.255336  377556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 19:25:06.258354  377556 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 19:25:06.258377  377556 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 19:25:06.260855  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 19:25:06.260883  377556 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 19:25:06.267551  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 19:25:06.276693  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 19:25:06.276741  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 19:25:06.278671  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 19:25:06.286450  377556 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 19:25:06.286472  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 19:25:06.309616  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 19:25:06.310231  377556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 19:25:06.310256  377556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 19:25:06.313625  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 19:25:06.313652  377556 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 19:25:06.330896  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 19:25:06.330935  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 19:25:06.357698  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 19:25:06.361915  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 19:25:06.361946  377556 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 19:25:06.366448  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 19:25:06.366487  377556 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 19:25:06.391773  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 19:25:06.391806  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 19:25:06.416797  377556 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:25:06.416836  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 19:25:06.425379  377556 node_ready.go:35] waiting up to 6m0s for node "addons-695107" to be "Ready" ...
	I1217 19:25:06.425666  377556 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1217 19:25:06.447332  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 19:25:06.447360  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 19:25:06.469277  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 19:25:06.469308  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 19:25:06.494810  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:25:06.512413  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:25:06.517859  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 19:25:06.542186  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 19:25:06.542220  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 19:25:06.612089  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 19:25:06.612116  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 19:25:06.676652  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 19:25:06.676688  377556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 19:25:06.686201  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 19:25:06.696617  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 19:25:06.766425  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 19:25:06.766452  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 19:25:06.832480  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 19:25:06.832511  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 19:25:06.911292  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 19:25:06.911327  377556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 19:25:06.950756  377556 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-695107" context rescaled to 1 replicas
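	The kapi.go:214 line above shows minikube trimming CoreDNS to a single replica for the one-node cluster. As a rough illustration only (this is not minikube's own code; the kubeconfig path and the "coredns" deployment name are assumptions here), the scale subresource can be updated like this with client-go:

```go
// rescale_coredns.go - illustrative sketch of rescaling a deployment to 1 replica.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location for the sketch; minikube drives kubectl over SSH instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := cs.AppsV1().Deployments("kube-system")

	// Read the current scale subresource, then write it back with one replica.
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
```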
	I1217 19:25:06.989556  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 19:25:07.763061  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.529039341s)
	I1217 19:25:07.763144  377556 addons.go:495] Verifying addon ingress=true in "addons-695107"
	I1217 19:25:07.763210  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.518605361s)
	I1217 19:25:07.763343  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.495767329s)
	I1217 19:25:07.763387  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.484695807s)
	I1217 19:25:07.763467  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.453815712s)
	I1217 19:25:07.763500  377556 addons.go:495] Verifying addon metrics-server=true in "addons-695107"
	I1217 19:25:07.763748  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.406009372s)
	I1217 19:25:07.763901  377556 addons.go:495] Verifying addon registry=true in "addons-695107"
	I1217 19:25:07.763836  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268996057s)
	I1217 19:25:07.764822  377556 out.go:179] * Verifying ingress addon...
	I1217 19:25:07.765646  377556 out.go:179] * Verifying registry addon...
	I1217 19:25:07.767373  377556 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 19:25:07.769154  377556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 19:25:07.770884  377556 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 19:25:07.770903  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:07.772190  377556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 19:25:07.772209  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
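	The kapi.go:75/86/96 lines above are a polling loop: list pods by label selector, report the current phase, and keep waiting while they are Pending. A minimal client-go sketch of that pattern follows; the kubeconfig path, namespace, selector and timeout are illustrative values, not minikube's exact ones:

```go
// waitforpods.go - illustrative sketch of the label-selector wait seen in the kapi.go lines.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPods polls until every pod matching selector in ns is Running, or the timeout expires.
func waitForPods(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for pods with label %q", selector)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPods(cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```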
	I1217 19:25:08.238972  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.726502222s)
	I1217 19:25:08.239005  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.721110576s)
	W1217 19:25:08.239029  377556 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 19:25:08.239055  377556 retry.go:31] will retry after 347.64053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
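	The failure and retry above are the usual CRD-registration race: the VolumeSnapshotClass object is applied in the same batch that creates its CRD, so the first apply fails with "ensure CRDs are installed first" and retry.go re-runs it after a short backoff (and, as seen further down, with --force). A minimal sketch of that retry-with-backoff pattern around kubectl; the file list, attempt count and backoff values are assumptions for illustration, not minikube's exact ones:

```go
// applyretry.go - illustrative retry-with-backoff around kubectl apply.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f` when the server has not yet
// registered the CRDs that later manifests in the same batch depend on.
func applyWithRetry(files []string, attempts int) error {
	backoff := 350 * time.Millisecond
	args := []string{"apply"}
	for _, f := range files {
		args = append(args, "-f", f)
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
		log.Printf("will retry after %v: %v", backoff, lastErr)
		time.Sleep(backoff)
		backoff *= 2 // simple exponential backoff between attempts
	}
	return lastErr
}

func main() {
	// Hypothetical manifest list mirroring the volumesnapshots addon batch above.
	files := []string{
		"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
		"/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
	}
	if err := applyWithRetry(files, 5); err != nil {
		log.Fatal(err)
	}
}
```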
	I1217 19:25:08.239063  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.552823259s)
	I1217 19:25:08.239135  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.542491348s)
	I1217 19:25:08.239326  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.249729211s)
	I1217 19:25:08.239358  377556 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-695107"
	I1217 19:25:08.241573  377556 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-695107 service yakd-dashboard -n yakd-dashboard
	
	I1217 19:25:08.241575  377556 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 19:25:08.244150  377556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 19:25:08.247169  377556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 19:25:08.247200  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:08.270188  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:08.271612  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:08.429207  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:08.586889  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:25:08.747411  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:08.771668  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:08.771820  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:09.249300  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:09.271220  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:09.271820  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:09.748069  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:09.770918  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:09.771282  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:10.247620  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:10.270110  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:10.271665  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:10.747782  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:10.770647  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:10.772274  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:10.928779  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:11.073809  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.486867337s)
	I1217 19:25:11.248435  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:11.271238  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:11.271872  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:11.747561  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:11.770588  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:11.771968  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:12.247606  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:12.270259  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:12.272741  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:12.747945  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:12.771045  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:12.771513  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:12.928993  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:13.248125  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:13.270716  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:13.271576  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:13.587667  377556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 19:25:13.587772  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:13.606614  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:13.722163  377556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 19:25:13.735841  377556 addons.go:239] Setting addon gcp-auth=true in "addons-695107"
	I1217 19:25:13.735895  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:13.736332  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:13.747930  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:13.754684  377556 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 19:25:13.754737  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:13.771176  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:13.772406  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:13.773445  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:13.874256  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:25:13.875702  377556 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 19:25:13.876906  377556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 19:25:13.876922  377556 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 19:25:13.890953  377556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 19:25:13.890991  377556 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 19:25:13.904131  377556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 19:25:13.904153  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 19:25:13.916609  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 19:25:14.232995  377556 addons.go:495] Verifying addon gcp-auth=true in "addons-695107"
	I1217 19:25:14.234279  377556 out.go:179] * Verifying gcp-auth addon...
	I1217 19:25:14.238188  377556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 19:25:14.240428  377556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 19:25:14.240449  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:14.247197  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:14.348473  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:14.348576  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:14.741125  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:14.747412  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:14.771549  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:14.771726  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:15.242454  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:15.247175  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:15.271058  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:15.271484  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:15.428547  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:15.741692  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:15.747053  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:15.770961  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:15.772388  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:16.242101  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:16.247617  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:16.270705  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:16.272242  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:16.741533  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:16.747052  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:16.771472  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:16.772510  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:17.242013  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:17.247595  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:17.270551  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:17.271968  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:17.428774  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:17.741780  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:17.746915  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:17.770952  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:17.772610  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:18.241407  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:18.246968  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:18.271024  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:18.272199  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:18.742705  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:18.763416  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:18.783002  377556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 19:25:18.783031  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:18.783901  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:18.927917  377556 node_ready.go:49] node "addons-695107" is "Ready"
	I1217 19:25:18.927952  377556 node_ready.go:38] duration metric: took 12.502528031s for node "addons-695107" to be "Ready" ...
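	node_ready.go above waited roughly 12.5s for the node's Ready condition to flip to True. A bare-bones client-go version of that check is sketched below; the kubeconfig path and the two-second poll interval are assumptions for the sketch, not minikube's implementation:

```go
// nodeready.go - illustrative check of a node's Ready condition.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isNodeReady reports whether the named node has the Ready condition set to True.
func isNodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll until the node reports Ready, like the "will retry" lines above.
	for {
		ready, err := isNodeReady(cs, "addons-695107")
		if err != nil {
			log.Fatal(err)
		}
		if ready {
			fmt.Println(`node "addons-695107" is "Ready"`)
			return
		}
		time.Sleep(2 * time.Second)
	}
}
```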
	I1217 19:25:18.927991  377556 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:25:18.928059  377556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:25:18.942890  377556 api_server.go:72] duration metric: took 13.105919689s to wait for apiserver process to appear ...
	I1217 19:25:18.942923  377556 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:25:18.942956  377556 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 19:25:18.947693  377556 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 19:25:18.948672  377556 api_server.go:141] control plane version: v1.34.3
	I1217 19:25:18.948699  377556 api_server.go:131] duration metric: took 5.769192ms to wait for apiserver health ...
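	api_server.go first waits for the kube-apiserver process (via pgrep) and then polls https://192.168.49.2:8443/healthz until it answers 200 "ok". A stripped-down version of that health poll is below; it skips TLS verification to stay short, whereas the real check would use the cluster's CA and client certificates:

```go
// healthz.go - illustrative poll of the apiserver /healthz endpoint.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// InsecureSkipVerify keeps the sketch short; production code would load the cluster CA.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver healthz did not become ready within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```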
	I1217 19:25:18.948711  377556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:25:18.953823  377556 system_pods.go:59] 20 kube-system pods found
	I1217 19:25:18.953886  377556 system_pods.go:61] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:18.953913  377556 system_pods.go:61] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:18.953928  377556 system_pods.go:61] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:18.953937  377556 system_pods.go:61] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending
	I1217 19:25:18.953947  377556 system_pods.go:61] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:18.953955  377556 system_pods.go:61] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:18.953961  377556 system_pods.go:61] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:18.953970  377556 system_pods.go:61] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:18.953975  377556 system_pods.go:61] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:18.953988  377556 system_pods.go:61] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:18.953997  377556 system_pods.go:61] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:18.954003  377556 system_pods.go:61] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:18.954013  377556 system_pods.go:61] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:18.954024  377556 system_pods.go:61] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:18.954032  377556 system_pods.go:61] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:18.954044  377556 system_pods.go:61] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:18.954050  377556 system_pods.go:61] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:18.954054  377556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending
	I1217 19:25:18.954061  377556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending
	I1217 19:25:18.954065  377556 system_pods.go:61] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:25:18.954085  377556 system_pods.go:74] duration metric: took 5.356317ms to wait for pod list to return data ...
	I1217 19:25:18.954095  377556 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:25:18.956186  377556 default_sa.go:45] found service account: "default"
	I1217 19:25:18.956207  377556 default_sa.go:55] duration metric: took 2.103899ms for default service account to be created ...
	I1217 19:25:18.956218  377556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:25:18.959351  377556 system_pods.go:86] 20 kube-system pods found
	I1217 19:25:18.959390  377556 system_pods.go:89] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:18.959402  377556 system_pods.go:89] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:18.959416  377556 system_pods.go:89] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:18.959423  377556 system_pods.go:89] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending
	I1217 19:25:18.959432  377556 system_pods.go:89] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:18.959441  377556 system_pods.go:89] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:18.959448  377556 system_pods.go:89] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:18.959460  377556 system_pods.go:89] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:18.959467  377556 system_pods.go:89] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:18.959479  377556 system_pods.go:89] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:18.959484  377556 system_pods.go:89] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:18.959492  377556 system_pods.go:89] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:18.959497  377556 system_pods.go:89] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:18.959506  377556 system_pods.go:89] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:18.959513  377556 system_pods.go:89] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:18.959520  377556 system_pods.go:89] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:18.959527  377556 system_pods.go:89] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:18.959536  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending
	I1217 19:25:18.959542  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending
	I1217 19:25:18.959552  377556 system_pods.go:89] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:25:18.959573  377556 retry.go:31] will retry after 237.14182ms: missing components: kube-dns
	I1217 19:25:19.211050  377556 system_pods.go:86] 20 kube-system pods found
	I1217 19:25:19.211118  377556 system_pods.go:89] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:19.211129  377556 system_pods.go:89] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:19.211138  377556 system_pods.go:89] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:19.211146  377556 system_pods.go:89] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 19:25:19.211155  377556 system_pods.go:89] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:19.211162  377556 system_pods.go:89] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:19.211169  377556 system_pods.go:89] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:19.211175  377556 system_pods.go:89] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:19.211191  377556 system_pods.go:89] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:19.211200  377556 system_pods.go:89] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:19.211211  377556 system_pods.go:89] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:19.211218  377556 system_pods.go:89] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:19.211234  377556 system_pods.go:89] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:19.211249  377556 system_pods.go:89] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:19.211265  377556 system_pods.go:89] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:19.211273  377556 system_pods.go:89] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:19.211287  377556 system_pods.go:89] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:19.211299  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.211309  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.211316  377556 system_pods.go:89] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:25:19.211340  377556 retry.go:31] will retry after 272.576358ms: missing components: kube-dns
	I1217 19:25:19.263619  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:19.263709  377556 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 19:25:19.263726  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:19.274862  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:19.275785  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:19.490624  377556 system_pods.go:86] 20 kube-system pods found
	I1217 19:25:19.490665  377556 system_pods.go:89] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:19.490677  377556 system_pods.go:89] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:19.490689  377556 system_pods.go:89] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:19.490697  377556 system_pods.go:89] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 19:25:19.490706  377556 system_pods.go:89] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:19.490711  377556 system_pods.go:89] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:19.490717  377556 system_pods.go:89] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:19.490723  377556 system_pods.go:89] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:19.490728  377556 system_pods.go:89] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:19.490750  377556 system_pods.go:89] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:19.490757  377556 system_pods.go:89] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:19.490764  377556 system_pods.go:89] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:19.490773  377556 system_pods.go:89] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:19.490781  377556 system_pods.go:89] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:19.490789  377556 system_pods.go:89] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:19.490797  377556 system_pods.go:89] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:19.490808  377556 system_pods.go:89] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:19.490815  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.490828  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.490833  377556 system_pods.go:89] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Running
	I1217 19:25:19.490847  377556 system_pods.go:126] duration metric: took 534.620447ms to wait for k8s-apps to be running ...
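	The system_pods.go passes above list all kube-system pods and retry ("missing components: kube-dns") until the expected core components report Running. A compact sketch of that component check, keyed by label selector; the kube-dns-to-label mapping, kubeconfig path and timings are assumptions for illustration:

```go
// systempods.go - illustrative check that core kube-system components are running.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForComponents retries until at least one Running pod exists for each label selector.
func waitForComponents(cs *kubernetes.Clientset, selectors map[string]string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		var missing []string
		for name, sel := range selectors {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				return err
			}
			running := false
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running = true
					break
				}
			}
			if !running {
				missing = append(missing, name)
			}
		}
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("missing components: %v", missing)
		}
		fmt.Printf("will retry, missing components: %v\n", missing)
		time.Sleep(250 * time.Millisecond)
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// In kubeadm clusters the CoreDNS pods carry the k8s-app=kube-dns label.
	if err := waitForComponents(cs, map[string]string{"kube-dns": "k8s-app=kube-dns"}, 6*time.Minute); err != nil {
		log.Fatal(err)
	}
}
```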
	I1217 19:25:19.490860  377556 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:25:19.490919  377556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:25:19.510563  377556 system_svc.go:56] duration metric: took 19.691657ms WaitForService to wait for kubelet
	I1217 19:25:19.510604  377556 kubeadm.go:587] duration metric: took 13.673636357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:25:19.510631  377556 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:25:19.514637  377556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:25:19.514671  377556 node_conditions.go:123] node cpu capacity is 8
	I1217 19:25:19.514695  377556 node_conditions.go:105] duration metric: took 4.054182ms to run NodePressure ...
	I1217 19:25:19.514717  377556 start.go:242] waiting for startup goroutines ...
	I1217 19:25:19.743485  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:19.748149  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:19.772213  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:19.773006  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:20.241714  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:20.247875  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:20.271286  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:20.272452  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:20.743039  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:20.748003  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:20.771555  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:20.772727  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:21.242433  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:21.246579  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:21.270925  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:21.272259  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:21.742861  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:21.747910  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:21.772565  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:21.773045  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:22.241595  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:22.247950  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:22.271230  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:22.273469  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:22.742139  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:22.748002  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:22.771274  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:22.772710  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:23.242039  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:23.248344  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:23.271824  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:23.272005  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:23.743018  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:23.748205  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:23.771571  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:23.772831  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:24.240954  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:24.247882  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:24.270537  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:24.272201  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:24.741825  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:24.747621  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:24.770832  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:24.772668  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:25.242284  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:25.248273  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:25.271495  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:25.271777  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:25.742351  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:25.746788  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:25.771099  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:25.772689  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:26.242477  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:26.247938  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:26.271020  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:26.273136  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:26.741881  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:26.747688  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:26.770790  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:26.772353  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:27.241663  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:27.247664  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:27.271630  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:27.272143  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:27.742199  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:27.748111  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:27.771990  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:27.773619  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:28.242509  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:28.247314  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:28.271471  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:28.272045  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:28.741900  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:28.747677  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:28.770771  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:28.772676  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:29.242751  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:29.249825  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:29.317130  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:29.317260  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:29.742959  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:29.748230  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:29.771588  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:29.773110  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:30.241924  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:30.247325  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:30.272032  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:30.272211  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:30.741667  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:30.747180  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:30.770965  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:30.772779  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:31.242608  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:31.247053  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:31.270929  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:31.272496  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:31.741865  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:31.747945  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:31.771182  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:31.772799  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:32.242305  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:32.247289  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:32.271368  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:32.272810  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:32.742985  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:32.747704  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:32.770966  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:32.772534  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:33.242144  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:33.247557  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:33.271488  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:33.273710  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:33.742404  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:33.746739  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:33.771008  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:33.773286  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:34.242343  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:34.247731  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:34.271880  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:34.272113  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:34.742347  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:34.843268  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:34.843543  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:34.843574  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:35.242461  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:35.247275  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:35.271534  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:35.271777  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:35.742103  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:35.747578  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:35.771383  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:35.771776  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:36.241482  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:36.246909  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:36.270603  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:36.272338  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:36.741986  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:36.747267  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:36.770863  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:36.771594  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:37.241596  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:37.247312  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:37.271262  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:37.271926  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:37.741591  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:37.747167  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:37.771555  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:37.772691  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:38.241521  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:38.247472  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:38.271780  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:38.272370  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:38.741445  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:38.747135  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:38.770954  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:38.772675  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:39.267032  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:39.267137  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:39.309717  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:39.309987  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:39.742657  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:39.747308  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:39.771015  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:39.771758  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:40.242026  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:40.247998  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:40.271117  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:40.272390  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:40.741926  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:40.747878  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:40.771319  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:40.772460  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:41.242173  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:41.247954  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:41.270958  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:41.272451  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:41.743500  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:41.747230  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:41.771415  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:41.772657  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:42.242232  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:42.247909  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:42.271252  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:42.272475  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:42.742011  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:42.749482  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:42.772293  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:42.772589  377556 kapi.go:107] duration metric: took 35.003435052s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 19:25:43.243167  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:43.343644  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:43.343644  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:43.741858  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:43.748529  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:43.771905  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:44.242099  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:44.248459  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:44.271897  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:44.742745  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:44.748093  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:44.771042  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:45.242686  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:45.247865  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:45.271099  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:45.743368  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:45.748400  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:45.772703  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:46.242429  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:46.249309  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:46.271960  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:46.742448  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:46.747682  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:46.771866  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:47.241791  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:47.247124  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:47.270972  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:47.743032  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:47.748969  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:47.770936  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:48.242454  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:48.247350  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:48.271623  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:48.742455  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:48.747401  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:48.771735  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:49.242631  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:49.247697  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:49.271960  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:49.743069  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:49.748546  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:49.771989  377556 kapi.go:107] duration metric: took 42.00461088s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 19:25:50.241760  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:50.247332  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:50.741942  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:50.748005  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:51.242619  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:51.247205  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:51.741861  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:51.747658  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:52.241813  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:52.247640  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:52.742443  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:52.747171  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:53.242408  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:53.247256  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:53.742788  377556 kapi.go:107] duration metric: took 39.504610513s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 19:25:53.744624  377556 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-695107 cluster.
	I1217 19:25:53.746129  377556 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 19:25:53.747659  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:53.749192  377556 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 19:25:54.247529  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:54.748204  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:55.248107  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:55.748326  377556 kapi.go:107] duration metric: took 47.504172695s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 19:25:55.750186  377556 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, cloud-spanner, metrics-server, default-storageclass, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 19:25:55.751380  377556 addons.go:530] duration metric: took 49.914381753s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns storage-provisioner inspektor-gadget cloud-spanner metrics-server default-storageclass nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 19:25:55.751433  377556 start.go:247] waiting for cluster config update ...
	I1217 19:25:55.751459  377556 start.go:256] writing updated cluster config ...
	I1217 19:25:55.751734  377556 ssh_runner.go:195] Run: rm -f paused
	I1217 19:25:55.755746  377556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:25:55.758781  377556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqcjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.763040  377556 pod_ready.go:94] pod "coredns-66bc5c9577-gqcjx" is "Ready"
	I1217 19:25:55.763065  377556 pod_ready.go:86] duration metric: took 4.262868ms for pod "coredns-66bc5c9577-gqcjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.765035  377556 pod_ready.go:83] waiting for pod "etcd-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.768637  377556 pod_ready.go:94] pod "etcd-addons-695107" is "Ready"
	I1217 19:25:55.768658  377556 pod_ready.go:86] duration metric: took 3.601877ms for pod "etcd-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.770612  377556 pod_ready.go:83] waiting for pod "kube-apiserver-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.774189  377556 pod_ready.go:94] pod "kube-apiserver-addons-695107" is "Ready"
	I1217 19:25:55.774208  377556 pod_ready.go:86] duration metric: took 3.576591ms for pod "kube-apiserver-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.775794  377556 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.160327  377556 pod_ready.go:94] pod "kube-controller-manager-addons-695107" is "Ready"
	I1217 19:25:56.160361  377556 pod_ready.go:86] duration metric: took 384.547978ms for pod "kube-controller-manager-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.359891  377556 pod_ready.go:83] waiting for pod "kube-proxy-fqlbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.759880  377556 pod_ready.go:94] pod "kube-proxy-fqlbd" is "Ready"
	I1217 19:25:56.759910  377556 pod_ready.go:86] duration metric: took 399.963561ms for pod "kube-proxy-fqlbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.959998  377556 pod_ready.go:83] waiting for pod "kube-scheduler-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:57.360303  377556 pod_ready.go:94] pod "kube-scheduler-addons-695107" is "Ready"
	I1217 19:25:57.360334  377556 pod_ready.go:86] duration metric: took 400.303802ms for pod "kube-scheduler-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:57.360346  377556 pod_ready.go:40] duration metric: took 1.604568997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:25:57.407405  377556 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 19:25:57.410188  377556 out.go:179] * Done! kubectl is now configured to use "addons-695107" cluster and "default" namespace by default
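The pod_ready checks above poll the control-plane pods by label selector until each reports Ready. Outside the test harness, roughly the same check can be reproduced with kubectl against the same labels; a minimal sketch, assuming the kubeconfig is already pointed at the addons-695107 cluster:

	kubectl wait --namespace kube-system --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
	kubectl wait --namespace kube-system --for=condition=Ready pod -l component=kube-apiserver --timeout=4m

kubectl wait blocks until the condition holds or the timeout expires, which is roughly the manual equivalent of the 4m0s extra wait logged above.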
	
	
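The gcp-auth messages earlier in this log point out that credential mounting can be skipped per pod by adding a label with the gcp-auth-skip-secret key. A minimal sketch of creating such a pod from the command line (the pod name my-pod and the sleep workload are illustrative only, not part of this run):

	kubectl run my-pod --image=busybox --restart=Never --labels=gcp-auth-skip-secret=true -- sleep 3600

Because the gcp-auth webhook mutates pods at admission time, the label has to be present when the pod is created; labeling an already-running pod does not remove an existing mount, which is why the log suggests recreating existing pods or rerunning addons enable with --refresh.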
	==> CRI-O <==
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.174095219Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-xfrgf/POD" id=1734a416-6121-4cef-9a93-dad46b2a0149 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.174188819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.182965101Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xfrgf Namespace:default ID:dff1a7c2b955ea38c17210216aa6ddc88eac23277dc62afa6f0346f32ded713a UID:ceffe665-8ad8-4ec3-bb14-56ed4fa34875 NetNS:/var/run/netns/17d02ae1-7a1c-4de8-a536-82bea4c98916 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000288060}] Aliases:map[]}"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.183000904Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-xfrgf to CNI network \"kindnet\" (type=ptp)"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.193440145Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-xfrgf Namespace:default ID:dff1a7c2b955ea38c17210216aa6ddc88eac23277dc62afa6f0346f32ded713a UID:ceffe665-8ad8-4ec3-bb14-56ed4fa34875 NetNS:/var/run/netns/17d02ae1-7a1c-4de8-a536-82bea4c98916 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000288060}] Aliases:map[]}"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.19355898Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-xfrgf for CNI network kindnet (type=ptp)"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.194537643Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.195426265Z" level=info msg="Ran pod sandbox dff1a7c2b955ea38c17210216aa6ddc88eac23277dc62afa6f0346f32ded713a with infra container: default/hello-world-app-5d498dc89-xfrgf/POD" id=1734a416-6121-4cef-9a93-dad46b2a0149 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.196716925Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cbdb37dc-5b07-4baf-bf73-4f0a58c3f821 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.196866551Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=cbdb37dc-5b07-4baf-bf73-4f0a58c3f821 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.196916017Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=cbdb37dc-5b07-4baf-bf73-4f0a58c3f821 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.197638246Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=5f671c1d-a4de-40ea-a461-f5ce24db50dc name=/runtime.v1.ImageService/PullImage
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.203246378Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.647021783Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=5f671c1d-a4de-40ea-a461-f5ce24db50dc name=/runtime.v1.ImageService/PullImage
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.647634597Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=69e1a160-9b26-4de5-b7ae-a4bb9408a9ad name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.649019761Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=725a4c7a-fab0-4e7b-b063-634eab266bdd name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.653056098Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-xfrgf/hello-world-app" id=fa11f753-619b-4441-8bb4-27217a9af762 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.653197005Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.659302474Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.659526755Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/beae6b8fe58913563f8348ad8ddff4ab6d9fb6d14e9c40712df9b4f29837c921/merged/etc/passwd: no such file or directory"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.65955957Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/beae6b8fe58913563f8348ad8ddff4ab6d9fb6d14e9c40712df9b4f29837c921/merged/etc/group: no such file or directory"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.659864969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.690630958Z" level=info msg="Created container 62c40570c0cb6e2f173fa67960340c270dc55b2d968eaa5d9b564dcb0aa3ed3b: default/hello-world-app-5d498dc89-xfrgf/hello-world-app" id=fa11f753-619b-4441-8bb4-27217a9af762 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.691308806Z" level=info msg="Starting container: 62c40570c0cb6e2f173fa67960340c270dc55b2d968eaa5d9b564dcb0aa3ed3b" id=a9ddfbe6-9cfd-4c9d-9cd0-703d023d7da1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 19:28:35 addons-695107 crio[774]: time="2025-12-17T19:28:35.693618965Z" level=info msg="Started container" PID=9565 containerID=62c40570c0cb6e2f173fa67960340c270dc55b2d968eaa5d9b564dcb0aa3ed3b description=default/hello-world-app-5d498dc89-xfrgf/hello-world-app id=a9ddfbe6-9cfd-4c9d-9cd0-703d023d7da1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dff1a7c2b955ea38c17210216aa6ddc88eac23277dc62afa6f0346f32ded713a
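The CRI-O entries above cover the full lifecycle of the hello-world-app pod: sandbox creation, CNI attachment, the echo-server image pull, and container start. If the node is still up, the resulting container can be inspected directly with crictl over minikube ssh; a sketch, assuming the container ID prefix 62c40570c0cb6 from the container status output below is still valid:

	out/minikube-linux-amd64 -p addons-695107 ssh -- sudo crictl ps --name hello-world-app
	out/minikube-linux-amd64 -p addons-695107 ssh -- sudo crictl inspect 62c40570c0cb6

crictl inspect accepts an unambiguous ID prefix, so the short form shown in the container status table is enough.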
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	62c40570c0cb6       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   dff1a7c2b955e       hello-world-app-5d498dc89-xfrgf             default
	e83d103607808       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   262f364b23823       registry-creds-764b6fb674-lglwq             kube-system
	c2636b9d05a98       public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c                                           2 minutes ago            Running             nginx                                    0                   4c80afc6b0b90       nginx                                       default
	6ceebc8cecc61       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   1ba8d993fc0cf       busybox                                     default
	05e7c087fc88a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	030ee45fef382       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	1bf59a626763d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   68281e22beef0       gcp-auth-78565c9fb4-47zbj                   gcp-auth
	e582a6b346e42       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	6f1389fbed5a8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	bb406a59b4704       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	346a00d466786       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             2 minutes ago            Running             controller                               0                   19f1fe8ebf827       ingress-nginx-controller-85d4c799dd-8mcfr   ingress-nginx
	aba3d9ac9ad0f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            2 minutes ago            Running             gadget                                   0                   a04bb218fcebf       gadget-7dc2q                                gadget
	7927a0e1520a1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   8d0e5b5714e9f       amd-gpu-device-plugin-xl62h                 kube-system
	4fd8c32f1f75b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	3e0c0283ddfb5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   72c17c4898d8e       registry-proxy-8dlbt                        kube-system
	1309939d3b4da       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   f89451a363726       snapshot-controller-7d9fbc56b8-fgm6r        kube-system
	8f0c2abe1917b       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   37af11647fb9c       nvidia-device-plugin-daemonset-5hdv7        kube-system
	801db4b070e91       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   57d29a23d4915       snapshot-controller-7d9fbc56b8-pvnhq        kube-system
	c7eea19f4d49e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   5b2c9baa92816       csi-hostpath-resizer-0                      kube-system
	51a71566b557a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   b62c025e344f6       csi-hostpath-attacher-0                     kube-system
	48b0e3db6f0aa       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   ce2ca032f5616       cloud-spanner-emulator-5bdddb765-kzhtq      default
	f513241821060       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              patch                                    0                   86551c7f8f2af       ingress-nginx-admission-patch-6bdmz         ingress-nginx
	8cf6f22d4cee1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   3 minutes ago            Exited              create                                   0                   786d7b47bd169       ingress-nginx-admission-create-rz9rh        ingress-nginx
	6368372bba7d6       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   0379ce408a490       local-path-provisioner-648f6765c9-26mcp     local-path-storage
	a485e9f994ff9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   714ab275b341d       metrics-server-85b7d694d7-tqbbx             kube-system
	04f733eceac24       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   41c8193fe9db5       registry-6b586f9694-2jvdr                   kube-system
	c3f541802ca32       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   cc4a1df792692       kube-ingress-dns-minikube                   kube-system
	fbf994d990fd1       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              3 minutes ago            Running             yakd                                     0                   eb38fc22772c6       yakd-dashboard-6654c87f9b-mcjdv             yakd-dashboard
	e3aca076801c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   1ad4f96fbe212       storage-provisioner                         kube-system
	f32dab99d943e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   a7e7a5180fc84       coredns-66bc5c9577-gqcjx                    kube-system
	b68b1b351d2b0       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           3 minutes ago            Running             kindnet-cni                              0                   737cc0abe5ef6       kindnet-dkw9t                               kube-system
	bc8813162646d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             3 minutes ago            Running             kube-proxy                               0                   9f87571df8090       kube-proxy-fqlbd                            kube-system
	bea3125cf2914       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             3 minutes ago            Running             kube-apiserver                           0                   0fb0ad136d8a3       kube-apiserver-addons-695107                kube-system
	5875440c2f308       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             3 minutes ago            Running             kube-scheduler                           0                   b127cca34e4a1       kube-scheduler-addons-695107                kube-system
	87468d7032ea6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             3 minutes ago            Running             etcd                                     0                   1ff3a1be0c30a       etcd-addons-695107                          kube-system
	fd7cf6d64d69e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             3 minutes ago            Running             kube-controller-manager                  0                   85d669b40b72a       kube-controller-manager-addons-695107       kube-system
	
	
	==> coredns [f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e] <==
	[INFO] 10.244.0.22:40935 - 62714 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127924s
	[INFO] 10.244.0.22:48515 - 50585 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008736479s
	[INFO] 10.244.0.22:40378 - 21566 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.010368846s
	[INFO] 10.244.0.22:51793 - 49561 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007977127s
	[INFO] 10.244.0.22:53031 - 591 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008559816s
	[INFO] 10.244.0.22:39894 - 12422 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005723779s
	[INFO] 10.244.0.22:44189 - 61358 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006756026s
	[INFO] 10.244.0.22:51634 - 450 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002162985s
	[INFO] 10.244.0.22:57480 - 58787 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002659393s
	[INFO] 10.244.0.25:43853 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000224698s
	[INFO] 10.244.0.25:42453 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000205275s
	[INFO] 10.244.0.27:60384 - 41622 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000221496s
	[INFO] 10.244.0.27:32933 - 67 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00027025s
	[INFO] 10.244.0.27:37813 - 20208 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000134784s
	[INFO] 10.244.0.27:37510 - 17028 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000241395s
	[INFO] 10.244.0.27:39866 - 29132 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000082434s
	[INFO] 10.244.0.27:52337 - 18337 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000175333s
	[INFO] 10.244.0.27:52774 - 18402 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006575882s
	[INFO] 10.244.0.27:40252 - 51187 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.006677773s
	[INFO] 10.244.0.27:41457 - 35023 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005131833s
	[INFO] 10.244.0.27:54253 - 38373 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006978477s
	[INFO] 10.244.0.27:57343 - 58543 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005503279s
	[INFO] 10.244.0.27:57569 - 39045 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.006541064s
	[INFO] 10.244.0.27:46718 - 33704 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001766088s
	[INFO] 10.244.0.27:41060 - 29081 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002539059s
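The coredns queries above show the standard cluster search-path expansion: accounts.google.com is tried against each search suffix (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-provided domains) and returns NXDOMAIN until the bare name finally resolves with NOERROR. The same behaviour can be reproduced from any pod in the cluster; a sketch using the busybox pod from this run, assuming it is still running (its search list will start with its own namespace rather than kube-system):

	kubectl exec busybox -- cat /etc/resolv.conf
	kubectl exec busybox -- nslookup accounts.google.com

resolv.conf shows the search list and the ndots:5 option that trigger the extra lookups seen in the log.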
	
	
	==> describe nodes <==
	Name:               addons-695107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-695107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=addons-695107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_25_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-695107
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-695107"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:24:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-695107
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:28:34 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:26:32 +0000   Wed, 17 Dec 2025 19:24:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:26:32 +0000   Wed, 17 Dec 2025 19:24:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:26:32 +0000   Wed, 17 Dec 2025 19:24:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:26:32 +0000   Wed, 17 Dec 2025 19:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-695107
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e217694c-a589-401c-9719-5d685e266755
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m39s
	  default                     cloud-spanner-emulator-5bdddb765-kzhtq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  default                     hello-world-app-5d498dc89-xfrgf              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-7dc2q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  gcp-auth                    gcp-auth-78565c9fb4-47zbj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m22s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-8mcfr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m29s
	  kube-system                 amd-gpu-device-plugin-xl62h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 coredns-66bc5c9577-gqcjx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m31s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 csi-hostpathplugin-j4557                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 etcd-addons-695107                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m37s
	  kube-system                 kindnet-dkw9t                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m31s
	  kube-system                 kube-apiserver-addons-695107                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-controller-manager-addons-695107        200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 kube-proxy-fqlbd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 kube-scheduler-addons-695107                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 metrics-server-85b7d694d7-tqbbx              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m29s
	  kube-system                 nvidia-device-plugin-daemonset-5hdv7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 registry-6b586f9694-2jvdr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  kube-system                 registry-creds-764b6fb674-lglwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m30s
	  kube-system                 registry-proxy-8dlbt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m18s
	  kube-system                 snapshot-controller-7d9fbc56b8-fgm6r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 snapshot-controller-7d9fbc56b8-pvnhq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m28s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  local-path-storage          local-path-provisioner-648f6765c9-26mcp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m29s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-mcjdv              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m29s  kube-proxy       
	  Normal  Starting                 3m36s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m36s  kubelet          Node addons-695107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m36s  kubelet          Node addons-695107 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m36s  kubelet          Node addons-695107 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m32s  node-controller  Node addons-695107 event: Registered Node addons-695107 in Controller
	  Normal  NodeReady                3m18s  kubelet          Node addons-695107 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e] <==
	{"level":"warn","ts":"2025-12-17T19:24:57.114892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.122124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.128973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.135456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.149021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.155227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.162037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.170126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.190543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.194327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.201139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.208481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.263153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:08.818923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:08.825865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.660968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.670410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.685394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.693937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41736","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T19:26:00.639105Z","caller":"traceutil/trace.go:172","msg":"trace[1689525748] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"124.834485ms","start":"2025-12-17T19:26:00.514220Z","end":"2025-12-17T19:26:00.639055Z","steps":["trace[1689525748] 'process raft request'  (duration: 60.544661ms)","trace[1689525748] 'compare'  (duration: 64.194071ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T19:26:00.639300Z","caller":"traceutil/trace.go:172","msg":"trace[237250783] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"124.876352ms","start":"2025-12-17T19:26:00.514409Z","end":"2025-12-17T19:26:00.639285Z","steps":["trace[237250783] 'process raft request'  (duration: 124.656588ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639321Z","caller":"traceutil/trace.go:172","msg":"trace[274642689] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"121.211403ms","start":"2025-12-17T19:26:00.518098Z","end":"2025-12-17T19:26:00.639309Z","steps":["trace[274642689] 'process raft request'  (duration: 121.173341ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639488Z","caller":"traceutil/trace.go:172","msg":"trace[1760482284] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"125.053728ms","start":"2025-12-17T19:26:00.514421Z","end":"2025-12-17T19:26:00.639475Z","steps":["trace[1760482284] 'process raft request'  (duration: 124.722702ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639359Z","caller":"traceutil/trace.go:172","msg":"trace[921640965] transaction","detail":"{read_only:false; response_revision:1217; number_of_response:1; }","duration":"123.333842ms","start":"2025-12-17T19:26:00.516017Z","end":"2025-12-17T19:26:00.639351Z","steps":["trace[921640965] 'process raft request'  (duration: 123.214702ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639575Z","caller":"traceutil/trace.go:172","msg":"trace[813321175] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"125.145965ms","start":"2025-12-17T19:26:00.514419Z","end":"2025-12-17T19:26:00.639565Z","steps":["trace[813321175] 'process raft request'  (duration: 124.772555ms)"],"step_count":1}
	
	
	==> gcp-auth [1bf59a626763deba3e4128c122638ed60fa800d27b9db4eca8e6bd2a3a6bb2ff] <==
	2025/12/17 19:25:52 GCP Auth Webhook started!
	2025/12/17 19:25:57 Ready to marshal response ...
	2025/12/17 19:25:57 Ready to write response ...
	2025/12/17 19:25:57 Ready to marshal response ...
	2025/12/17 19:25:57 Ready to write response ...
	2025/12/17 19:25:57 Ready to marshal response ...
	2025/12/17 19:25:57 Ready to write response ...
	2025/12/17 19:26:12 Ready to marshal response ...
	2025/12/17 19:26:12 Ready to write response ...
	2025/12/17 19:26:16 Ready to marshal response ...
	2025/12/17 19:26:16 Ready to write response ...
	2025/12/17 19:26:18 Ready to marshal response ...
	2025/12/17 19:26:18 Ready to write response ...
	2025/12/17 19:26:18 Ready to marshal response ...
	2025/12/17 19:26:18 Ready to write response ...
	2025/12/17 19:26:27 Ready to marshal response ...
	2025/12/17 19:26:27 Ready to write response ...
	2025/12/17 19:26:30 Ready to marshal response ...
	2025/12/17 19:26:30 Ready to write response ...
	2025/12/17 19:26:48 Ready to marshal response ...
	2025/12/17 19:26:48 Ready to write response ...
	2025/12/17 19:28:34 Ready to marshal response ...
	2025/12/17 19:28:34 Ready to write response ...
	
	
	==> kernel <==
	 19:28:36 up  1:11,  0 user,  load average: 0.64, 1.87, 2.05
	Linux addons-695107 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388] <==
	I1217 19:26:28.580301       1 main.go:301] handling current node
	I1217 19:26:38.579609       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:26:38.579663       1 main.go:301] handling current node
	I1217 19:26:48.580060       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:26:48.580123       1 main.go:301] handling current node
	I1217 19:26:58.580317       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:26:58.580355       1 main.go:301] handling current node
	I1217 19:27:08.586236       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:27:08.586284       1 main.go:301] handling current node
	I1217 19:27:18.579235       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:27:18.579276       1 main.go:301] handling current node
	I1217 19:27:28.588336       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:27:28.588372       1 main.go:301] handling current node
	I1217 19:27:38.588465       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:27:38.588506       1 main.go:301] handling current node
	I1217 19:27:48.587146       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:27:48.587191       1 main.go:301] handling current node
	I1217 19:27:58.588342       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:27:58.588376       1 main.go:301] handling current node
	I1217 19:28:08.588489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:28:08.588535       1 main.go:301] handling current node
	I1217 19:28:18.587000       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:28:18.587031       1 main.go:301] handling current node
	I1217 19:28:28.579244       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:28:28.579285       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22] <==
	E1217 19:25:18.760835       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:18.782717       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.148.239:443: connect: connection refused
	E1217 19:25:18.782872       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:18.786248       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.148.239:443: connect: connection refused
	E1217 19:25:18.786351       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:30.353050       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 19:25:30.353149       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 19:25:30.353122       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	E1217 19:25:30.354596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	E1217 19:25:30.360443       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	E1217 19:25:30.381468       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	I1217 19:25:30.450710       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1217 19:25:34.660925       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 19:25:34.670341       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 19:25:34.685325       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 19:25:34.693914       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1217 19:26:06.093000       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50062: use of closed network connection
	E1217 19:26:06.242238       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50076: use of closed network connection
	I1217 19:26:12.046722       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1217 19:26:12.262280       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.105.177"}
	I1217 19:26:37.118122       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1217 19:28:34.932028       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.109.111"}
	
	
	==> kube-controller-manager [fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33] <==
	I1217 19:25:04.641848       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 19:25:04.641947       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 19:25:04.641956       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 19:25:04.641992       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 19:25:04.642131       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 19:25:04.643455       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 19:25:04.645785       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 19:25:04.646988       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:04.647009       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:04.647044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 19:25:04.647062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 19:25:04.647134       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 19:25:04.649414       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 19:25:04.654634       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 19:25:04.654728       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 19:25:04.654836       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-695107"
	I1217 19:25:04.654886       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 19:25:04.666791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 19:25:07.358478       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 19:25:19.657634       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 19:25:34.652784       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 19:25:34.652874       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 19:25:34.677066       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 19:25:34.753332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:34.777835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56] <==
	I1217 19:25:06.273168       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:25:06.835714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 19:25:06.953997       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 19:25:06.954036       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 19:25:06.954135       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:25:07.196067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 19:25:07.196301       1 server_linux.go:132] "Using iptables Proxier"
	I1217 19:25:07.270703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:25:07.273119       1 server.go:527] "Version info" version="v1.34.3"
	I1217 19:25:07.273214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:25:07.276381       1 config.go:200] "Starting service config controller"
	I1217 19:25:07.276442       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:25:07.276953       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:25:07.277012       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:25:07.277051       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:25:07.277058       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:25:07.277174       1 config.go:309] "Starting node config controller"
	I1217 19:25:07.277181       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:25:07.277188       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:25:07.376837       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:25:07.378237       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:25:07.378237       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16] <==
	E1217 19:24:57.662372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:24:57.662463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 19:24:57.662478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:24:57.662483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:24:57.662586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:24:57.662597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 19:24:57.662624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:24:57.662663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:24:57.662723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:24:57.662717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:24:57.662812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:24:58.521384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 19:24:58.522696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:24:58.532914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:24:58.546456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:24:58.583159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:24:58.591441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:24:58.599729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:24:58.604974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:24:58.626039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:24:58.635244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 19:24:58.732029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 19:24:58.801925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:24:58.923605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 19:25:00.358155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.382645    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/035b6135-5124-4d49-8c90-14d512a9172f-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "035b6135-5124-4d49-8c90-14d512a9172f" (UID: "035b6135-5124-4d49-8c90-14d512a9172f"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.382688    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xp2k\" (UniqueName: \"kubernetes.io/projected/035b6135-5124-4d49-8c90-14d512a9172f-kube-api-access-8xp2k\") pod \"035b6135-5124-4d49-8c90-14d512a9172f\" (UID: \"035b6135-5124-4d49-8c90-14d512a9172f\") "
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.382767    1289 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/035b6135-5124-4d49-8c90-14d512a9172f-gcp-creds\") on node \"addons-695107\" DevicePath \"\""
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.385562    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^5471f70c-db7e-11f0-88d9-a677ec6f3d2a" (OuterVolumeSpecName: "task-pv-storage") pod "035b6135-5124-4d49-8c90-14d512a9172f" (UID: "035b6135-5124-4d49-8c90-14d512a9172f"). InnerVolumeSpecName "pvc-719941b9-a97f-47b1-8dad-ec07217c378b". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.385592    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/035b6135-5124-4d49-8c90-14d512a9172f-kube-api-access-8xp2k" (OuterVolumeSpecName: "kube-api-access-8xp2k") pod "035b6135-5124-4d49-8c90-14d512a9172f" (UID: "035b6135-5124-4d49-8c90-14d512a9172f"). InnerVolumeSpecName "kube-api-access-8xp2k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.483838    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8xp2k\" (UniqueName: \"kubernetes.io/projected/035b6135-5124-4d49-8c90-14d512a9172f-kube-api-access-8xp2k\") on node \"addons-695107\" DevicePath \"\""
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.483907    1289 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-719941b9-a97f-47b1-8dad-ec07217c378b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5471f70c-db7e-11f0-88d9-a677ec6f3d2a\") on node \"addons-695107\" "
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.488822    1289 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-719941b9-a97f-47b1-8dad-ec07217c378b" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^5471f70c-db7e-11f0-88d9-a677ec6f3d2a") on node "addons-695107"
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.584975    1289 reconciler_common.go:299] "Volume detached for volume \"pvc-719941b9-a97f-47b1-8dad-ec07217c378b\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^5471f70c-db7e-11f0-88d9-a677ec6f3d2a\") on node \"addons-695107\" DevicePath \"\""
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.742132    1289 scope.go:117] "RemoveContainer" containerID="31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a"
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.751813    1289 scope.go:117] "RemoveContainer" containerID="31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a"
	Dec 17 19:26:54 addons-695107 kubelet[1289]: E1217 19:26:54.752270    1289 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a\": container with ID starting with 31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a not found: ID does not exist" containerID="31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a"
	Dec 17 19:26:54 addons-695107 kubelet[1289]: I1217 19:26:54.752315    1289 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a"} err="failed to get container status \"31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a\": rpc error: code = NotFound desc = could not find container \"31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a\": container with ID starting with 31db198eed9bc11234686ef89a54cc98ff9d5df5545aaa37650146b9b06a6d2a not found: ID does not exist"
	Dec 17 19:26:56 addons-695107 kubelet[1289]: I1217 19:26:56.166067    1289 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="035b6135-5124-4d49-8c90-14d512a9172f" path="/var/lib/kubelet/pods/035b6135-5124-4d49-8c90-14d512a9172f/volumes"
	Dec 17 19:27:00 addons-695107 kubelet[1289]: I1217 19:27:00.154434    1289 scope.go:117] "RemoveContainer" containerID="7fe4735b203714444c97ee50cccc142ccdc87a28a31a88395d8f81989ec11f65"
	Dec 17 19:27:00 addons-695107 kubelet[1289]: I1217 19:27:00.165018    1289 scope.go:117] "RemoveContainer" containerID="ce8f9935e453ceec8ce5bb72722412337b32cef8246e37ea85fb06aedb09bf59"
	Dec 17 19:27:00 addons-695107 kubelet[1289]: I1217 19:27:00.173976    1289 scope.go:117] "RemoveContainer" containerID="a34979fddc50413ea77fc203bb3806586d0fdd02f6ef744f3dfed8dc3701dcfe"
	Dec 17 19:27:00 addons-695107 kubelet[1289]: I1217 19:27:00.182948    1289 scope.go:117] "RemoveContainer" containerID="88de9cb787252838e608d3b618a0ede8a8f3bd5cd3c1670144edba5257c5862b"
	Dec 17 19:27:09 addons-695107 kubelet[1289]: I1217 19:27:09.163579    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8dlbt" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:27:57 addons-695107 kubelet[1289]: I1217 19:27:57.162621    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xl62h" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:28:12 addons-695107 kubelet[1289]: I1217 19:28:12.163463    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-5hdv7" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:28:19 addons-695107 kubelet[1289]: I1217 19:28:19.163187    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8dlbt" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:28:34 addons-695107 kubelet[1289]: I1217 19:28:34.949945    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lphv\" (UniqueName: \"kubernetes.io/projected/ceffe665-8ad8-4ec3-bb14-56ed4fa34875-kube-api-access-7lphv\") pod \"hello-world-app-5d498dc89-xfrgf\" (UID: \"ceffe665-8ad8-4ec3-bb14-56ed4fa34875\") " pod="default/hello-world-app-5d498dc89-xfrgf"
	Dec 17 19:28:34 addons-695107 kubelet[1289]: I1217 19:28:34.950007    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/ceffe665-8ad8-4ec3-bb14-56ed4fa34875-gcp-creds\") pod \"hello-world-app-5d498dc89-xfrgf\" (UID: \"ceffe665-8ad8-4ec3-bb14-56ed4fa34875\") " pod="default/hello-world-app-5d498dc89-xfrgf"
	Dec 17 19:28:36 addons-695107 kubelet[1289]: I1217 19:28:36.146366    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-xfrgf" podStartSLOduration=1.695173332 podStartE2EDuration="2.146341366s" podCreationTimestamp="2025-12-17 19:28:34 +0000 UTC" firstStartedPulling="2025-12-17 19:28:35.197243436 +0000 UTC m=+215.122782022" lastFinishedPulling="2025-12-17 19:28:35.648411475 +0000 UTC m=+215.573950056" observedRunningTime="2025-12-17 19:28:36.14593662 +0000 UTC m=+216.071475225" watchObservedRunningTime="2025-12-17 19:28:36.146341366 +0000 UTC m=+216.071879969"
	
	
	==> storage-provisioner [e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960] <==
	W1217 19:28:12.068701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:14.071507       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:14.075720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:16.079345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:16.084139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:18.087344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:18.091684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:20.094574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:20.098596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:22.101561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:22.105736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:24.108429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:24.113817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:26.116653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:26.120504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:28.123859       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:28.127726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:30.131434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:30.135621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:32.139330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:32.143414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:34.146505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:34.151285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:36.154919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:28:36.158952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-695107 -n addons-695107
helpers_test.go:270: (dbg) Run:  kubectl --context addons-695107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-695107 describe pod ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-695107 describe pod ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz: exit status 1 (58.182687ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-rz9rh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6bdmz" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-695107 describe pod ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (254.330875ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:28:37.341247  391657 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:28:37.341503  391657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:28:37.341512  391657 out.go:374] Setting ErrFile to fd 2...
	I1217 19:28:37.341516  391657 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:28:37.341730  391657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:28:37.342020  391657 mustload.go:66] Loading cluster: addons-695107
	I1217 19:28:37.342429  391657 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:28:37.342450  391657 addons.go:622] checking whether the cluster is paused
	I1217 19:28:37.342532  391657 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:28:37.342545  391657 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:28:37.342956  391657 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:28:37.362682  391657 ssh_runner.go:195] Run: systemctl --version
	I1217 19:28:37.362767  391657 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:28:37.380258  391657 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:28:37.480916  391657 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:28:37.481053  391657 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:28:37.511893  391657 cri.go:89] found id: "e83d1036078086aaca80c341c18864a4fa25b95af7b2bca016c4f75ad06315fa"
	I1217 19:28:37.511917  391657 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:28:37.511922  391657 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:28:37.511925  391657 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:28:37.511928  391657 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:28:37.511931  391657 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:28:37.511934  391657 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:28:37.511937  391657 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:28:37.511940  391657 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:28:37.511947  391657 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:28:37.511952  391657 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:28:37.511957  391657 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:28:37.511961  391657 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:28:37.511966  391657 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:28:37.511970  391657 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:28:37.511987  391657 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:28:37.511990  391657 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:28:37.511995  391657 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:28:37.511998  391657 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:28:37.512000  391657 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:28:37.512003  391657 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:28:37.512006  391657 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:28:37.512008  391657 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:28:37.512011  391657 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:28:37.512014  391657 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:28:37.512017  391657 cri.go:89] found id: ""
	I1217 19:28:37.512064  391657 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:28:37.526519  391657 out.go:203] 
	W1217 19:28:37.527861  391657 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:28:37.527884  391657 out.go:285] * 
	* 
	W1217 19:28:37.531903  391657 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:28:37.533337  391657 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable ingress --alsologtostderr -v=1: exit status 11 (251.313794ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:28:37.595333  391718 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:28:37.595468  391718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:28:37.595478  391718 out.go:374] Setting ErrFile to fd 2...
	I1217 19:28:37.595483  391718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:28:37.595683  391718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:28:37.596040  391718 mustload.go:66] Loading cluster: addons-695107
	I1217 19:28:37.596405  391718 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:28:37.596425  391718 addons.go:622] checking whether the cluster is paused
	I1217 19:28:37.596522  391718 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:28:37.596543  391718 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:28:37.596967  391718 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:28:37.614494  391718 ssh_runner.go:195] Run: systemctl --version
	I1217 19:28:37.614559  391718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:28:37.632705  391718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:28:37.734133  391718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:28:37.734239  391718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:28:37.764038  391718 cri.go:89] found id: "e83d1036078086aaca80c341c18864a4fa25b95af7b2bca016c4f75ad06315fa"
	I1217 19:28:37.764059  391718 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:28:37.764063  391718 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:28:37.764067  391718 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:28:37.764071  391718 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:28:37.764091  391718 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:28:37.764097  391718 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:28:37.764101  391718 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:28:37.764106  391718 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:28:37.764119  391718 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:28:37.764122  391718 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:28:37.764125  391718 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:28:37.764128  391718 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:28:37.764131  391718 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:28:37.764134  391718 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:28:37.764138  391718 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:28:37.764145  391718 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:28:37.764149  391718 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:28:37.764152  391718 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:28:37.764160  391718 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:28:37.764165  391718 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:28:37.764168  391718 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:28:37.764170  391718 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:28:37.764173  391718 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:28:37.764175  391718 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:28:37.764178  391718 cri.go:89] found id: ""
	I1217 19:28:37.764228  391718 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:28:37.778755  391718 out.go:203] 
	W1217 19:28:37.780024  391718 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:28:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:28:37.780047  391718 out.go:285] * 
	* 
	W1217 19:28:37.783992  391718 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:28:37.785436  391718 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.01s)
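Note on the exit status 11 failures above and below: the addon disable commands never reach the addon itself. Before disabling, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` on the node; on this crio image /run/runc does not exist (which suggests crio is configured with an OCI runtime other than runc), so the check aborts with MK_ADDON_DISABLE_PAUSED. A minimal reproduction sketch, using only the commands already shown in the stderr above and assuming SSH access to the node through the same profile:

	# container listing used by the paused check - succeeds in the logs above
	$ out/minikube-linux-amd64 -p addons-695107 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# follow-up runc call - expected to fail here with "open /run/runc: no such file or directory"
	$ out/minikube-linux-amd64 -p addons-695107 ssh -- sudo runc list -f json
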

                                                
                                    
TestAddons/parallel/InspektorGadget (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-7dc2q" [44a7cb95-73b6-4343-8935-df74565dfb8c] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004434496s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (258.754598ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:14.218216  387277 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:14.218468  387277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:14.218476  387277 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:14.218481  387277 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:14.219118  387277 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:14.219619  387277 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:14.220347  387277 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:14.220371  387277 addons.go:622] checking whether the cluster is paused
	I1217 19:26:14.220486  387277 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:14.220504  387277 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:14.221023  387277 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:14.239312  387277 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:14.239376  387277 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:14.257009  387277 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:14.359500  387277 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:14.359606  387277 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:14.390006  387277 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:14.390035  387277 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:14.390039  387277 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:14.390042  387277 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:14.390046  387277 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:14.390049  387277 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:14.390051  387277 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:14.390054  387277 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:14.390057  387277 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:14.390062  387277 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:14.390065  387277 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:14.390067  387277 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:14.390071  387277 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:14.390073  387277 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:14.390092  387277 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:14.390110  387277 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:14.390121  387277 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:14.390128  387277 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:14.390133  387277 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:14.390136  387277 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:14.390139  387277 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:14.390142  387277 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:14.390144  387277 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:14.390147  387277 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:14.390149  387277 cri.go:89] found id: ""
	I1217 19:26:14.390192  387277 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:14.405156  387277 out.go:203] 
	W1217 19:26:14.406626  387277 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:14.406645  387277 out.go:285] * 
	* 
	W1217 19:26:14.410594  387277 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:14.412025  387277 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.26s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 3.260102ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002918308s
addons_test.go:465: (dbg) Run:  kubectl --context addons-695107 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (264.913571ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:11.642623  386544 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:11.642900  386544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:11.642913  386544 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:11.642917  386544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:11.643236  386544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:11.643837  386544 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:11.644730  386544 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:11.644756  386544 addons.go:622] checking whether the cluster is paused
	I1217 19:26:11.644910  386544 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:11.644929  386544 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:11.645409  386544 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:11.664694  386544 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:11.664748  386544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:11.683694  386544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:11.784890  386544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:11.784964  386544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:11.818394  386544 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:11.818425  386544 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:11.818429  386544 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:11.818431  386544 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:11.818435  386544 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:11.818438  386544 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:11.818441  386544 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:11.818444  386544 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:11.818447  386544 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:11.818453  386544 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:11.818457  386544 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:11.818462  386544 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:11.818466  386544 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:11.818470  386544 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:11.818473  386544 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:11.818491  386544 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:11.818502  386544 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:11.818508  386544 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:11.818512  386544 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:11.818517  386544 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:11.818522  386544 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:11.818527  386544 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:11.818530  386544 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:11.818533  386544 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:11.818536  386544 cri.go:89] found id: ""
	I1217 19:26:11.818602  386544 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:11.834229  386544 out.go:203] 
	W1217 19:26:11.835545  386544 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:11.835581  386544 out.go:285] * 
	* 
	W1217 19:26:11.840619  386544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:11.841950  386544 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.34s)
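For what it is worth, the metrics pipeline itself can be confirmed independently of the failing disable path by rerunning the command the test already issued against the same context:

	$ kubectl --context addons-695107 top pods -n kube-system

If that prints per-pod CPU and memory usage, metrics-server is serving data and the FAIL above reflects only the MK_ADDON_DISABLE_PAUSED pre-check, not the addon.
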

                                                
                                    
TestAddons/parallel/CSI (35.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1217 19:26:19.720859  375797 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 19:26:19.724669  375797 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 19:26:19.724697  375797 kapi.go:107] duration metric: took 3.854077ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 3.868429ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
2025/12/17 19:26:19 [DEBUG] GET http://192.168.49.2:5000
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [e86f935a-155e-4fa0-94c3-361332b93d27] Pending
helpers_test.go:353: "task-pv-pod" [e86f935a-155e-4fa0-94c3-361332b93d27] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004506707s
addons_test.go:574: (dbg) Run:  kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-695107 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-695107 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-695107 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-695107 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [035b6135-5124-4d49-8c90-14d512a9172f] Pending
helpers_test.go:353: "task-pv-pod-restore" [035b6135-5124-4d49-8c90-14d512a9172f] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003651477s
addons_test.go:616: (dbg) Run:  kubectl --context addons-695107 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-695107 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-695107 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (271.052488ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:55.155094  389552 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:55.155430  389552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:55.155448  389552 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:55.155455  389552 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:55.155680  389552 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:55.156050  389552 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:55.156452  389552 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:55.156475  389552 addons.go:622] checking whether the cluster is paused
	I1217 19:26:55.156574  389552 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:55.156591  389552 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:55.157044  389552 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:55.177859  389552 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:55.177930  389552 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:55.200915  389552 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:55.305388  389552 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:55.305493  389552 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:55.337359  389552 cri.go:89] found id: "e83d1036078086aaca80c341c18864a4fa25b95af7b2bca016c4f75ad06315fa"
	I1217 19:26:55.337391  389552 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:55.337396  389552 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:55.337399  389552 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:55.337402  389552 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:55.337407  389552 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:55.337409  389552 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:55.337412  389552 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:55.337415  389552 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:55.337425  389552 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:55.337428  389552 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:55.337430  389552 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:55.337433  389552 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:55.337435  389552 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:55.337438  389552 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:55.337450  389552 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:55.337455  389552 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:55.337459  389552 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:55.337462  389552 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:55.337465  389552 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:55.337468  389552 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:55.337470  389552 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:55.337473  389552 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:55.337476  389552 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:55.337479  389552 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:55.337482  389552 cri.go:89] found id: ""
	I1217 19:26:55.337533  389552 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:55.352360  389552 out.go:203] 
	W1217 19:26:55.353862  389552 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:55.353890  389552 out.go:285] * 
	* 
	W1217 19:26:55.358006  389552 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:55.359604  389552 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (258.633613ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:55.426369  389616 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:55.426492  389616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:55.426501  389616 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:55.426505  389616 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:55.426704  389616 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:55.426960  389616 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:55.427383  389616 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:55.427406  389616 addons.go:622] checking whether the cluster is paused
	I1217 19:26:55.427512  389616 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:55.427528  389616 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:55.427982  389616 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:55.447059  389616 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:55.447158  389616 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:55.465518  389616 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:55.567967  389616 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:55.568102  389616 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:55.597350  389616 cri.go:89] found id: "e83d1036078086aaca80c341c18864a4fa25b95af7b2bca016c4f75ad06315fa"
	I1217 19:26:55.597376  389616 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:55.597380  389616 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:55.597384  389616 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:55.597387  389616 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:55.597390  389616 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:55.597393  389616 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:55.597396  389616 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:55.597399  389616 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:55.597405  389616 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:55.597408  389616 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:55.597418  389616 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:55.597421  389616 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:55.597424  389616 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:55.597427  389616 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:55.597445  389616 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:55.597454  389616 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:55.597459  389616 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:55.597462  389616 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:55.597464  389616 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:55.597470  389616 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:55.597473  389616 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:55.597476  389616 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:55.597478  389616 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:55.597481  389616 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:55.597483  389616 cri.go:89] found id: ""
	I1217 19:26:55.597523  389616 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:55.612654  389616 out.go:203] 
	W1217 19:26:55.614132  389616 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:55Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:55.614157  389616 out.go:285] * 
	* 
	W1217 19:26:55.618109  389616 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:55.619798  389616 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (35.91s)
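Worth noting: the CSI data path itself passed in the log above - the hpvc claim bound, task-pv-pod ran, the new-snapshot-demo VolumeSnapshot was taken, and the restore through hpvc-restore and task-pv-pod-restore came up healthy; only the two trailing addon disable calls hit the paused-check error. The passing sequence can be replayed by hand with the same manifests (a sketch; assumes the testdata/csi-hostpath-driver manifests from the minikube test tree are available in the working directory):

	$ kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pvc.yaml
	$ kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	$ kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/snapshot.yaml
	$ kubectl --context addons-695107 delete pod task-pv-pod
	$ kubectl --context addons-695107 delete pvc hpvc
	$ kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	$ kubectl --context addons-695107 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
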

                                                
                                    
TestAddons/parallel/Headlamp (2.64s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-695107 --alsologtostderr -v=1
addons_test.go:810: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-695107 --alsologtostderr -v=1: exit status 11 (256.941772ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:06.571874  385700 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:06.572154  385700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:06.572163  385700 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:06.572167  385700 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:06.572389  385700 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:06.572657  385700 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:06.572954  385700 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:06.572969  385700 addons.go:622] checking whether the cluster is paused
	I1217 19:26:06.573057  385700 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:06.573069  385700 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:06.573508  385700 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:06.592179  385700 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:06.592264  385700 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:06.610456  385700 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:06.711150  385700 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:06.711239  385700 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:06.740412  385700 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:06.740461  385700 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:06.740473  385700 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:06.740479  385700 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:06.740484  385700 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:06.740492  385700 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:06.740498  385700 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:06.740504  385700 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:06.740510  385700 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:06.740527  385700 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:06.740538  385700 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:06.740544  385700 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:06.740551  385700 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:06.740560  385700 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:06.740567  385700 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:06.740593  385700 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:06.740604  385700 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:06.740612  385700 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:06.740618  385700 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:06.740622  385700 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:06.740627  385700 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:06.740632  385700 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:06.740637  385700 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:06.740643  385700 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:06.740651  385700 cri.go:89] found id: ""
	I1217 19:26:06.740711  385700 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:06.755805  385700 out.go:203] 
	W1217 19:26:06.756917  385700 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:06.756939  385700 out.go:285] * 
	* 
	W1217 19:26:06.760816  385700 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:06.762192  385700 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:812: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-695107 --alsologtostderr -v=1": exit status 11
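Each addons enable/disable in this report fails the same way: before touching the addon, minikube checks whether the cluster is paused by listing kube-system containers with crictl and then running `sudo runc list -f json` over SSH (see the stderr above), and that runc call exits 1 with "open /run/runc: no such file or directory", which is surfaced as MK_ADDON_ENABLE_PAUSED and exit status 11. The probe can be reproduced outside the harness; below is a minimal sketch, not minikube's own code, that shells into the node container with `docker exec` (the harness itself goes over SSH on 127.0.0.1:33143) using the container name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystem runs the same crictl query the paused check uses and returns
// the kube-system container IDs found inside the node container.
func listKubeSystem(node string) ([]string, error) {
	out, err := exec.Command("docker", "exec", node, "sudo", "crictl", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps: %w", err)
	}
	return strings.Fields(string(out)), nil
}

// runcList reproduces the probe that fails in this report with
// "open /run/runc: no such file or directory".
func runcList(node string) error {
	if out, err := exec.Command("docker", "exec", node, "sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
		return fmt.Errorf("runc list: %v\n%s", err, out)
	}
	return nil
}

func main() {
	const node = "addons-695107" // node container name from this run
	ids, err := listKubeSystem(node)
	fmt.Printf("kube-system containers: %d, err: %v\n", len(ids), err)
	fmt.Printf("runc list: %v\n", runcList(node))
}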
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect addons-695107
helpers_test.go:244: (dbg) docker inspect addons-695107:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b",
	        "Created": "2025-12-17T19:24:47.200826359Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 378208,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T19:24:47.241881892Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/hostname",
	        "HostsPath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/hosts",
	        "LogPath": "/var/lib/docker/containers/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b/a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b-json.log",
	        "Name": "/addons-695107",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-695107:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-695107",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a25be454b6b6755f669ac6ad734c4c39a3256155d18fbf1593189c0c5d90760b",
	                "LowerDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e00afe5cfccc6e8f90fd059d2fba050a5df4e4f0d2ecce470a37146e2175366f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-695107",
	                "Source": "/var/lib/docker/volumes/addons-695107/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-695107",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-695107",
	                "name.minikube.sigs.k8s.io": "addons-695107",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "700334be7342dcd9e2d5ec85ed0e268a4b88bcf2909b690c046ce972efebad24",
	            "SandboxKey": "/var/run/docker/netns/700334be7342",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-695107": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "86529ba95000ece5f19f992e0cff5b1ae18c2ea573e6a29bf2ac9f27693ae01b",
	                    "EndpointID": "4be3ee9ef435aa25872287041841053f86551477b6e548b82bbea555d3fab478",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "be:38:90:54:d8:d8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-695107",
	                        "a25be454b6b6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
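The harness derives its SSH endpoint from exactly this inspect output: the `docker container inspect -f` template in the stderr above reads NetworkSettings.Ports["22/tcp"][0].HostPort, which is 33143 for this run. A small Go sketch of the same lookup against `docker inspect` JSON (container name assumed from this run; not harness code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry models only the slice of `docker inspect` output used here.
type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// sshHostPort returns the host port bound to 22/tcp, e.g. "33143" above.
func sshHostPort(container string) (string, error) {
	raw, err := exec.Command("docker", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var entries []inspectEntry
	if err := json.Unmarshal(raw, &entries); err != nil {
		return "", err
	}
	if len(entries) == 0 || len(entries[0].NetworkSettings.Ports["22/tcp"]) == 0 {
		return "", fmt.Errorf("no 22/tcp binding for %s", container)
	}
	return entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort, nil
}

func main() {
	fmt.Println(sshHostPort("addons-695107"))
}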
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-695107 -n addons-695107
helpers_test.go:253: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p addons-695107 logs -n 25: (1.161847354s)
helpers_test.go:261: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-096016 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-096016   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-096016                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-096016   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ -o=json --download-only -p download-only-266209 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-266209   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-266209                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-266209   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ -o=json --download-only -p download-only-371882 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                           │ download-only-371882   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-371882                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-371882   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-096016                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-096016   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-266209                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-266209   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-371882                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-371882   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ --download-only -p download-docker-902104 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-902104 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ -p download-docker-902104                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-902104 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ --download-only -p binary-mirror-277393 --alsologtostderr --binary-mirror http://127.0.0.1:41979 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-277393   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ -p binary-mirror-277393                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-277393   │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ addons  │ enable dashboard -p addons-695107                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-695107          │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ addons  │ disable dashboard -p addons-695107                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-695107          │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ start   │ -p addons-695107 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-695107          │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:25 UTC │
	│ addons  │ addons-695107 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-695107          │ jenkins │ v1.37.0 │ 17 Dec 25 19:25 UTC │                     │
	│ addons  │ addons-695107 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-695107          │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	│ addons  │ enable headlamp -p addons-695107 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-695107          │ jenkins │ v1.37.0 │ 17 Dec 25 19:26 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:24:24
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:24:24.126785  377556 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:24:24.126878  377556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:24.126883  377556 out.go:374] Setting ErrFile to fd 2...
	I1217 19:24:24.126887  377556 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:24.127086  377556 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:24:24.127642  377556 out.go:368] Setting JSON to false
	I1217 19:24:24.128538  377556 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4015,"bootTime":1765995449,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:24:24.128603  377556 start.go:143] virtualization: kvm guest
	I1217 19:24:24.130321  377556 out.go:179] * [addons-695107] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:24:24.131751  377556 notify.go:221] Checking for updates...
	I1217 19:24:24.131762  377556 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:24:24.133167  377556 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:24:24.134423  377556 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:24:24.135417  377556 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:24:24.136394  377556 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:24:24.137281  377556 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:24:24.138442  377556 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:24:24.161791  377556 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:24:24.161958  377556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:24.215528  377556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 19:24:24.20631484 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:24.215629  377556 docker.go:319] overlay module found
	I1217 19:24:24.217285  377556 out.go:179] * Using the docker driver based on user configuration
	I1217 19:24:24.218407  377556 start.go:309] selected driver: docker
	I1217 19:24:24.218424  377556 start.go:927] validating driver "docker" against <nil>
	I1217 19:24:24.218438  377556 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:24:24.219003  377556 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:24.275175  377556 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:49 SystemTime:2025-12-17 19:24:24.265693812 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:24.275377  377556 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:24:24.275585  377556 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:24:24.277232  377556 out.go:179] * Using Docker driver with root privileges
	I1217 19:24:24.278459  377556 cni.go:84] Creating CNI manager for ""
	I1217 19:24:24.278522  377556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:24:24.278534  377556 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 19:24:24.278611  377556 start.go:353] cluster config:
	{Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1217 19:24:24.279911  377556 out.go:179] * Starting "addons-695107" primary control-plane node in "addons-695107" cluster
	I1217 19:24:24.281018  377556 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 19:24:24.282282  377556 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 19:24:24.283468  377556 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:24:24.283503  377556 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 19:24:24.283511  377556 cache.go:65] Caching tarball of preloaded images
	I1217 19:24:24.283554  377556 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 19:24:24.283608  377556 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 19:24:24.283620  377556 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 19:24:24.284034  377556 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json ...
	I1217 19:24:24.284064  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json: {Name:mka6729ae10fb93e1afc67a6d287fd4103077927 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:24.300139  377556 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 to local cache
	I1217 19:24:24.300290  377556 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory
	I1217 19:24:24.300311  377556 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local cache directory, skipping pull
	I1217 19:24:24.300321  377556 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in cache, skipping pull
	I1217 19:24:24.300328  377556 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 as a tarball
	I1217 19:24:24.300335  377556 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from local cache
	I1217 19:24:37.335348  377556 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 from cached tarball
	I1217 19:24:37.335399  377556 cache.go:243] Successfully downloaded all kic artifacts
	I1217 19:24:37.335462  377556 start.go:360] acquireMachinesLock for addons-695107: {Name:mkaa3d9b802c6da07df7c3f5fae85058f2767d38 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:24:37.335619  377556 start.go:364] duration metric: took 127.98µs to acquireMachinesLock for "addons-695107"
	I1217 19:24:37.335678  377556 start.go:93] Provisioning new machine with config: &{Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:24:37.335765  377556 start.go:125] createHost starting for "" (driver="docker")
	I1217 19:24:37.338497  377556 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1217 19:24:37.338809  377556 start.go:159] libmachine.API.Create for "addons-695107" (driver="docker")
	I1217 19:24:37.338854  377556 client.go:173] LocalClient.Create starting
	I1217 19:24:37.338973  377556 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:24:37.430093  377556 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:24:37.483306  377556 cli_runner.go:164] Run: docker network inspect addons-695107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:24:37.501673  377556 cli_runner.go:211] docker network inspect addons-695107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:24:37.501781  377556 network_create.go:284] running [docker network inspect addons-695107] to gather additional debugging logs...
	I1217 19:24:37.501808  377556 cli_runner.go:164] Run: docker network inspect addons-695107
	W1217 19:24:37.519300  377556 cli_runner.go:211] docker network inspect addons-695107 returned with exit code 1
	I1217 19:24:37.519346  377556 network_create.go:287] error running [docker network inspect addons-695107]: docker network inspect addons-695107: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-695107 not found
	I1217 19:24:37.519368  377556 network_create.go:289] output of [docker network inspect addons-695107]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-695107 not found
	
	** /stderr **
	I1217 19:24:37.519506  377556 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:24:37.537532  377556 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f964b0}
	I1217 19:24:37.537575  377556 network_create.go:124] attempt to create docker network addons-695107 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 19:24:37.537631  377556 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-695107 addons-695107
	I1217 19:24:37.586320  377556 network_create.go:108] docker network addons-695107 192.168.49.0/24 created
	I1217 19:24:37.586356  377556 kic.go:121] calculated static IP "192.168.49.2" for the "addons-695107" container
	I1217 19:24:37.586437  377556 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:24:37.603910  377556 cli_runner.go:164] Run: docker volume create addons-695107 --label name.minikube.sigs.k8s.io=addons-695107 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:24:37.622670  377556 oci.go:103] Successfully created a docker volume addons-695107
	I1217 19:24:37.622749  377556 cli_runner.go:164] Run: docker run --rm --name addons-695107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-695107 --entrypoint /usr/bin/test -v addons-695107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:24:43.305993  377556 cli_runner.go:217] Completed: docker run --rm --name addons-695107-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-695107 --entrypoint /usr/bin/test -v addons-695107:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (5.683199737s)
	I1217 19:24:43.306025  377556 oci.go:107] Successfully prepared a docker volume addons-695107
	I1217 19:24:43.306100  377556 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:24:43.306117  377556 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 19:24:43.306215  377556 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-695107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 19:24:47.128824  377556 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-695107:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.822557769s)
	I1217 19:24:47.128873  377556 kic.go:203] duration metric: took 3.822753031s to extract preloaded images to volume ...
	W1217 19:24:47.128958  377556 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:24:47.128998  377556 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:24:47.129038  377556 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:24:47.184052  377556 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-695107 --name addons-695107 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-695107 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-695107 --network addons-695107 --ip 192.168.49.2 --volume addons-695107:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:24:47.462500  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Running}}
	I1217 19:24:47.481698  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:24:47.499771  377556 cli_runner.go:164] Run: docker exec addons-695107 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:24:47.547992  377556 oci.go:144] the created container "addons-695107" has a running status.
	I1217 19:24:47.548034  377556 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa...
	I1217 19:24:47.722949  377556 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:24:47.749200  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:24:47.777795  377556 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:24:47.777846  377556 kic_runner.go:114] Args: [docker exec --privileged addons-695107 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:24:47.829734  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:24:47.851229  377556 machine.go:94] provisionDockerMachine start ...
	I1217 19:24:47.851327  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:47.872171  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:47.872477  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:47.872499  377556 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:24:48.017223  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-695107
	
	I1217 19:24:48.017261  377556 ubuntu.go:182] provisioning hostname "addons-695107"
	I1217 19:24:48.017333  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.036925  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:48.037410  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:48.037440  377556 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-695107 && echo "addons-695107" | sudo tee /etc/hostname
	I1217 19:24:48.194844  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-695107
	
	I1217 19:24:48.194956  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.213589  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:48.213961  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:48.213990  377556 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-695107' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-695107/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-695107' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:24:48.359274  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:24:48.359308  377556 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 19:24:48.359364  377556 ubuntu.go:190] setting up certificates
	I1217 19:24:48.359382  377556 provision.go:84] configureAuth start
	I1217 19:24:48.359447  377556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-695107
	I1217 19:24:48.378364  377556 provision.go:143] copyHostCerts
	I1217 19:24:48.378440  377556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 19:24:48.378618  377556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 19:24:48.378698  377556 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 19:24:48.378764  377556 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.addons-695107 san=[127.0.0.1 192.168.49.2 addons-695107 localhost minikube]
	I1217 19:24:48.420807  377556 provision.go:177] copyRemoteCerts
	I1217 19:24:48.420872  377556 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:24:48.420918  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.439742  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:48.542652  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:24:48.562286  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 19:24:48.579495  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:24:48.596512  377556 provision.go:87] duration metric: took 237.115873ms to configureAuth
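
configureAuth above generates a server certificate whose SANs cover 127.0.0.1, 192.168.49.2 and the names addons-695107, localhost and minikube. A minimal sketch of such a certificate with Go's crypto/x509 (self-signed only to keep the example short; minikube actually signs it with the CA key listed in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Key and certificate template covering the SANs listed in the log.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-695107"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"addons-695107", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
	}
	// Self-signed for brevity; the real flow signs with the minikube CA.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
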
	I1217 19:24:48.596545  377556 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:24:48.596737  377556 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:24:48.596857  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.615680  377556 main.go:143] libmachine: Using SSH client type: native
	I1217 19:24:48.615923  377556 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1217 19:24:48.615946  377556 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:24:48.908292  377556 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:24:48.908326  377556 machine.go:97] duration metric: took 1.057069076s to provisionDockerMachine
	I1217 19:24:48.908342  377556 client.go:176] duration metric: took 11.56947608s to LocalClient.Create
	I1217 19:24:48.908367  377556 start.go:167] duration metric: took 11.569560109s to libmachine.API.Create "addons-695107"
	I1217 19:24:48.908378  377556 start.go:293] postStartSetup for "addons-695107" (driver="docker")
	I1217 19:24:48.908398  377556 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:24:48.908491  377556 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:24:48.908544  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:48.927576  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.031768  377556 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:24:49.035634  377556 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:24:49.035665  377556 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:24:49.035683  377556 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:24:49.035765  377556 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:24:49.035805  377556 start.go:296] duration metric: took 127.418735ms for postStartSetup
	I1217 19:24:49.036223  377556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-695107
	I1217 19:24:49.054364  377556 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/config.json ...
	I1217 19:24:49.054674  377556 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:24:49.054736  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:49.074658  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.173512  377556 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:24:49.178254  377556 start.go:128] duration metric: took 11.842472013s to createHost
	I1217 19:24:49.178279  377556 start.go:83] releasing machines lock for "addons-695107", held for 11.84264303s
	I1217 19:24:49.178344  377556 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-695107
	I1217 19:24:49.196721  377556 ssh_runner.go:195] Run: cat /version.json
	I1217 19:24:49.196792  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:49.196800  377556 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:24:49.196933  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:24:49.215979  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.216293  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:24:49.367111  377556 ssh_runner.go:195] Run: systemctl --version
	I1217 19:24:49.373648  377556 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:24:49.409219  377556 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:24:49.414155  377556 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:24:49.414231  377556 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:24:49.441204  377556 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:24:49.441239  377556 start.go:496] detecting cgroup driver to use...
	I1217 19:24:49.441282  377556 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:24:49.441336  377556 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:24:49.458347  377556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:24:49.471281  377556 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:24:49.471335  377556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:24:49.488924  377556 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:24:49.507099  377556 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:24:49.589374  377556 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:24:49.676445  377556 docker.go:234] disabling docker service ...
	I1217 19:24:49.676509  377556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:24:49.695760  377556 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:24:49.708957  377556 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:24:49.795271  377556 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:24:49.878164  377556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:24:49.890692  377556 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:24:49.904628  377556 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:24:49.904679  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.914534  377556 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:24:49.914601  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.923632  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.932217  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.940812  377556 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:24:49.948972  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.958499  377556 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:24:49.972771  377556 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
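
Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with settings roughly like the following (reconstructed from the commands; the section headers are the usual CRI-O ones and are not shown verbatim in the log):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
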
	I1217 19:24:49.981741  377556 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:24:49.989429  377556 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:24:49.997504  377556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:24:50.075968  377556 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:24:50.211126  377556 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:24:50.211227  377556 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
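
The 60-second wait for /var/run/crio/crio.sock is just a stat poll with a deadline; a minimal sketch of that pattern (the function name and retry interval are illustrative):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, mirroring the
// "Will wait 60s for socket path" step above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("socket is ready")
}
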
	I1217 19:24:50.215194  377556 start.go:564] Will wait 60s for crictl version
	I1217 19:24:50.215247  377556 ssh_runner.go:195] Run: which crictl
	I1217 19:24:50.218819  377556 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:24:50.244459  377556 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:24:50.244538  377556 ssh_runner.go:195] Run: crio --version
	I1217 19:24:50.273030  377556 ssh_runner.go:195] Run: crio --version
	I1217 19:24:50.306352  377556 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 19:24:50.308153  377556 cli_runner.go:164] Run: docker network inspect addons-695107 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:24:50.325215  377556 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1217 19:24:50.329413  377556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:24:50.339820  377556 kubeadm.go:884] updating cluster {Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:24:50.339961  377556 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:24:50.340021  377556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:24:50.370798  377556 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:24:50.370820  377556 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:24:50.370866  377556 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:24:50.398361  377556 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:24:50.398386  377556 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:24:50.398394  377556 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.3 crio true true} ...
	I1217 19:24:50.398506  377556 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-695107 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:24:50.398589  377556 ssh_runner.go:195] Run: crio config
	I1217 19:24:50.445810  377556 cni.go:84] Creating CNI manager for ""
	I1217 19:24:50.445834  377556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:24:50.445851  377556 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:24:50.445880  377556 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-695107 NodeName:addons-695107 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:24:50.446028  377556 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-695107"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
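
The generated file is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sketch that splits a local copy on the document separators and reports each apiVersion/kind, assuming the config has been saved as kubeadm.yaml and gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"log"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// kubeadm.yaml is assumed to hold the multi-document config shown above.
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			log.Fatal(err)
		}
		fmt.Println(m["apiVersion"], m["kind"])
	}
}
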
	I1217 19:24:50.446119  377556 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 19:24:50.454668  377556 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:24:50.454757  377556 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:24:50.462996  377556 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 19:24:50.475974  377556 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:24:50.491924  377556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 19:24:50.505161  377556 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:24:50.508992  377556 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:24:50.519852  377556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:24:50.601686  377556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:24:50.625594  377556 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107 for IP: 192.168.49.2
	I1217 19:24:50.625622  377556 certs.go:195] generating shared ca certs ...
	I1217 19:24:50.625645  377556 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.625813  377556 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:24:50.784108  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt ...
	I1217 19:24:50.784153  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt: {Name:mka8faad6b0d9cfe9eff735b660a85cc4b3def2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.784356  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key ...
	I1217 19:24:50.784368  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key: {Name:mk1599aec95e8473475cf64374004073927776cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.784457  377556 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:24:50.814125  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt ...
	I1217 19:24:50.814182  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt: {Name:mk756fb6e2f220465394bbd8d88a3fc31836c1bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.814378  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key ...
	I1217 19:24:50.814391  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key: {Name:mk271354f73027bd48ba21a5a5e9a21db166cab2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.814495  377556 certs.go:257] generating profile certs ...
	I1217 19:24:50.814563  377556 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.key
	I1217 19:24:50.814579  377556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt with IP's: []
	I1217 19:24:50.879385  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt ...
	I1217 19:24:50.879427  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: {Name:mkca787255fc48452b56c2a6c08bfd95dd7307db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.879626  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.key ...
	I1217 19:24:50.879643  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.key: {Name:mk3392fc258ab3f5eb01658f05c7245392cb66a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.879720  377556 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526
	I1217 19:24:50.879742  377556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 19:24:50.974365  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526 ...
	I1217 19:24:50.974403  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526: {Name:mke9c0d6fff2cdc1fc4c7f9a670a76f1aa124df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.974592  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526 ...
	I1217 19:24:50.974607  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526: {Name:mkb5d41c2e17e562ad6a3d630d01716c086df6c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:50.974688  377556 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt.6c8e3526 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt
	I1217 19:24:50.974804  377556 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key.6c8e3526 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key
	I1217 19:24:50.974873  377556 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key
	I1217 19:24:50.974896  377556 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt with IP's: []
	I1217 19:24:51.002226  377556 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt ...
	I1217 19:24:51.002264  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt: {Name:mk5b0366c4e469d7eeda8c677bd7e7fe88fcde19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:51.002454  377556 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key ...
	I1217 19:24:51.002469  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key: {Name:mkae7871a9cb882df4155f0d4ec3bef895fd8530 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:24:51.002662  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:24:51.002702  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:24:51.002735  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:24:51.002774  377556 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:24:51.003524  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:24:51.022376  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:24:51.041251  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:24:51.059509  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:24:51.077617  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 19:24:51.095597  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:24:51.112586  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:24:51.130434  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 19:24:51.148409  377556 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:24:51.168994  377556 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:24:51.182483  377556 ssh_runner.go:195] Run: openssl version
	I1217 19:24:51.188793  377556 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.196803  377556 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:24:51.207139  377556 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.210920  377556 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.210977  377556 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:24:51.245501  377556 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:24:51.253723  377556 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 19:24:51.261603  377556 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:24:51.265182  377556 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:24:51.265247  377556 kubeadm.go:401] StartCluster: {Name:addons-695107 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:addons-695107 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:24:51.265390  377556 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:24:51.265452  377556 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:24:51.293126  377556 cri.go:89] found id: ""
	I1217 19:24:51.293200  377556 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:24:51.301544  377556 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:24:51.309824  377556 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:24:51.309898  377556 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:24:51.318727  377556 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:24:51.318747  377556 kubeadm.go:158] found existing configuration files:
	
	I1217 19:24:51.318789  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:24:51.326961  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:24:51.327034  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:24:51.334664  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:24:51.343069  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:24:51.343189  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:24:51.350569  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:24:51.358457  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:24:51.358527  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:24:51.366337  377556 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:24:51.373916  377556 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:24:51.373983  377556 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:24:51.381336  377556 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:24:51.448229  377556 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 19:24:51.510191  377556 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 19:25:00.926187  377556 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 19:25:00.926245  377556 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:25:00.926368  377556 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 19:25:00.926431  377556 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 19:25:00.926462  377556 kubeadm.go:319] OS: Linux
	I1217 19:25:00.926530  377556 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 19:25:00.926604  377556 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 19:25:00.926674  377556 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 19:25:00.926742  377556 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 19:25:00.926807  377556 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 19:25:00.926879  377556 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 19:25:00.926950  377556 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 19:25:00.927023  377556 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 19:25:00.927143  377556 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:25:00.927236  377556 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:25:00.927315  377556 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 19:25:00.927374  377556 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:25:00.930139  377556 out.go:252]   - Generating certificates and keys ...
	I1217 19:25:00.930216  377556 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:25:00.930271  377556 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:25:00.930325  377556 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:25:00.930408  377556 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:25:00.930500  377556 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:25:00.930577  377556 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:25:00.930667  377556 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:25:00.930806  377556 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-695107 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 19:25:00.930876  377556 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:25:00.931013  377556 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-695107 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 19:25:00.931163  377556 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:25:00.931245  377556 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:25:00.931305  377556 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:25:00.931394  377556 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:25:00.931447  377556 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:25:00.931492  377556 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 19:25:00.931538  377556 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:25:00.931590  377556 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:25:00.931633  377556 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:25:00.931698  377556 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:25:00.931751  377556 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:25:00.932955  377556 out.go:252]   - Booting up control plane ...
	I1217 19:25:00.933036  377556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:25:00.933131  377556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:25:00.933212  377556 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:25:00.933336  377556 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:25:00.933447  377556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 19:25:00.933531  377556 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 19:25:00.933604  377556 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:25:00.933673  377556 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:25:00.933818  377556 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 19:25:00.933948  377556 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 19:25:00.934041  377556 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.891231ms
	I1217 19:25:00.934184  377556 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 19:25:00.934298  377556 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1217 19:25:00.934423  377556 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 19:25:00.934525  377556 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 19:25:00.934629  377556 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005586878s
	I1217 19:25:00.934716  377556 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.149012348s
	I1217 19:25:00.934803  377556 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001316036s
	I1217 19:25:00.934928  377556 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:25:00.935048  377556 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:25:00.935107  377556 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:25:00.935292  377556 kubeadm.go:319] [mark-control-plane] Marking the node addons-695107 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:25:00.935388  377556 kubeadm.go:319] [bootstrap-token] Using token: qz59t1.jmxpy6ch9p6pe8xc
	I1217 19:25:00.936700  377556 out.go:252]   - Configuring RBAC rules ...
	I1217 19:25:00.936803  377556 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:25:00.936902  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:25:00.937124  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:25:00.937359  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:25:00.937524  377556 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:25:00.937664  377556 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:25:00.937869  377556 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:25:00.937940  377556 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:25:00.938010  377556 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:25:00.938021  377556 kubeadm.go:319] 
	I1217 19:25:00.938123  377556 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:25:00.938135  377556 kubeadm.go:319] 
	I1217 19:25:00.938244  377556 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:25:00.938265  377556 kubeadm.go:319] 
	I1217 19:25:00.938308  377556 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:25:00.938400  377556 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:25:00.938484  377556 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:25:00.938494  377556 kubeadm.go:319] 
	I1217 19:25:00.938573  377556 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:25:00.938586  377556 kubeadm.go:319] 
	I1217 19:25:00.938658  377556 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:25:00.938668  377556 kubeadm.go:319] 
	I1217 19:25:00.938744  377556 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:25:00.938858  377556 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:25:00.938946  377556 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:25:00.938956  377556 kubeadm.go:319] 
	I1217 19:25:00.939113  377556 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:25:00.939224  377556 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:25:00.939235  377556 kubeadm.go:319] 
	I1217 19:25:00.939366  377556 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qz59t1.jmxpy6ch9p6pe8xc \
	I1217 19:25:00.939491  377556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 19:25:00.939532  377556 kubeadm.go:319] 	--control-plane 
	I1217 19:25:00.939553  377556 kubeadm.go:319] 
	I1217 19:25:00.939659  377556 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:25:00.939682  377556 kubeadm.go:319] 
	I1217 19:25:00.939803  377556 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qz59t1.jmxpy6ch9p6pe8xc \
	I1217 19:25:00.939966  377556 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 19:25:00.939988  377556 cni.go:84] Creating CNI manager for ""
	I1217 19:25:00.940000  377556 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:25:00.941573  377556 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 19:25:00.942627  377556 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 19:25:00.947061  377556 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 19:25:00.947087  377556 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 19:25:00.961366  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 19:25:01.168583  377556 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:25:01.168674  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:01.168674  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-695107 minikube.k8s.io/updated_at=2025_12_17T19_25_01_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=addons-695107 minikube.k8s.io/primary=true
	I1217 19:25:01.178515  377556 ops.go:34] apiserver oom_adj: -16
	I1217 19:25:01.267311  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:01.767932  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:02.268089  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:02.768056  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:03.268102  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:03.768303  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:04.268304  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:04.768278  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:05.268167  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:05.768314  377556 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:25:05.836097  377556 kubeadm.go:1114] duration metric: took 4.66747042s to wait for elevateKubeSystemPrivileges
	I1217 19:25:05.836142  377556 kubeadm.go:403] duration metric: took 14.570903914s to StartCluster
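The repeated "kubectl get sa default" runs above are a readiness poll for the default ServiceAccount, part of the elevateKubeSystemPrivileges step timed here. A minimal shell sketch of the same pattern (assumes the kubeconfig path used in this run; the real implementation is Go, not this loop):

    # poll until the default ServiceAccount exists, then continue
    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done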
	I1217 19:25:05.836172  377556 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:25:05.836291  377556 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:25:05.836690  377556 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:25:05.836915  377556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:25:05.836935  377556 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:25:05.836995  377556 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
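The toEnable map above is what the per-profile addon toggles resolve to; the same switches are exposed on the CLI, for example (illustrative commands, any addon name works the same way):

    minikube -p addons-695107 addons enable metrics-server
    minikube -p addons-695107 addons disable inspektor-gadget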
	I1217 19:25:05.837155  377556 addons.go:70] Setting yakd=true in profile "addons-695107"
	I1217 19:25:05.837165  377556 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-695107"
	I1217 19:25:05.837183  377556 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-695107"
	I1217 19:25:05.837188  377556 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-695107"
	I1217 19:25:05.837212  377556 addons.go:70] Setting volumesnapshots=true in profile "addons-695107"
	I1217 19:25:05.837221  377556 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:25:05.837228  377556 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-695107"
	I1217 19:25:05.837229  377556 addons.go:239] Setting addon volumesnapshots=true in "addons-695107"
	I1217 19:25:05.837237  377556 addons.go:70] Setting registry-creds=true in profile "addons-695107"
	I1217 19:25:05.837264  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837265  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837175  377556 addons.go:239] Setting addon yakd=true in "addons-695107"
	I1217 19:25:05.837248  377556 addons.go:70] Setting registry=true in profile "addons-695107"
	I1217 19:25:05.837204  377556 addons.go:70] Setting volcano=true in profile "addons-695107"
	I1217 19:25:05.837290  377556 addons.go:239] Setting addon registry=true in "addons-695107"
	I1217 19:25:05.837295  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837309  377556 addons.go:239] Setting addon volcano=true in "addons-695107"
	I1217 19:25:05.837322  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837328  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837526  377556 addons.go:70] Setting ingress-dns=true in profile "addons-695107"
	I1217 19:25:05.837692  377556 addons.go:239] Setting addon ingress-dns=true in "addons-695107"
	I1217 19:25:05.837217  377556 addons.go:70] Setting storage-provisioner=true in profile "addons-695107"
	I1217 19:25:05.837750  377556 addons.go:239] Setting addon storage-provisioner=true in "addons-695107"
	I1217 19:25:05.837778  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837822  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837824  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837829  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837829  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837197  377556 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-695107"
	I1217 19:25:05.837957  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.838212  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.838259  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837580  377556 addons.go:70] Setting default-storageclass=true in profile "addons-695107"
	I1217 19:25:05.838682  377556 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-695107"
	I1217 19:25:05.838988  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.839205  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837613  377556 addons.go:70] Setting ingress=true in profile "addons-695107"
	I1217 19:25:05.839267  377556 addons.go:239] Setting addon ingress=true in "addons-695107"
	I1217 19:25:05.839309  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837592  377556 addons.go:70] Setting metrics-server=true in profile "addons-695107"
	I1217 19:25:05.839410  377556 addons.go:239] Setting addon metrics-server=true in "addons-695107"
	I1217 19:25:05.839441  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.839929  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837575  377556 addons.go:70] Setting gcp-auth=true in profile "addons-695107"
	I1217 19:25:05.841012  377556 mustload.go:66] Loading cluster: addons-695107
	I1217 19:25:05.837602  377556 addons.go:70] Setting inspektor-gadget=true in profile "addons-695107"
	I1217 19:25:05.841056  377556 addons.go:239] Setting addon inspektor-gadget=true in "addons-695107"
	I1217 19:25:05.841106  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.841261  377556 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:25:05.841529  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.841543  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837266  377556 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-695107"
	I1217 19:25:05.841964  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.837829  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837620  377556 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-695107"
	I1217 19:25:05.843986  377556 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-695107"
	I1217 19:25:05.844047  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.844655  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.837660  377556 addons.go:239] Setting addon registry-creds=true in "addons-695107"
	I1217 19:25:05.837628  377556 addons.go:70] Setting cloud-spanner=true in profile "addons-695107"
	I1217 19:25:05.847213  377556 out.go:179] * Verifying Kubernetes components...
	I1217 19:25:05.848193  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.848450  377556 addons.go:239] Setting addon cloud-spanner=true in "addons-695107"
	I1217 19:25:05.848491  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.848716  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.849903  377556 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:25:05.850631  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.850770  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.852239  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.911067  377556 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1217 19:25:05.914710  377556 out.go:179]   - Using image docker.io/registry:3.0.0
	I1217 19:25:05.918119  377556 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.6
	I1217 19:25:05.918545  377556 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1217 19:25:05.918570  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1217 19:25:05.918636  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.919696  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1217 19:25:05.919768  377556 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1217 19:25:05.919853  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.921728  377556 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1217 19:25:05.923193  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1217 19:25:05.923215  377556 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1217 19:25:05.923288  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.923658  377556 addons.go:239] Setting addon default-storageclass=true in "addons-695107"
	I1217 19:25:05.923710  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.924250  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.931850  377556 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1217 19:25:05.931930  377556 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1217 19:25:05.932354  377556 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1217 19:25:05.933236  377556 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1217 19:25:05.933255  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1217 19:25:05.933313  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.933886  377556 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 19:25:05.934569  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1217 19:25:05.934727  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.941047  377556 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-695107"
	I1217 19:25:05.941118  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.934616  377556 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:25:05.942982  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:05.945897  377556 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.1
	I1217 19:25:05.945956  377556 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1217 19:25:05.946046  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1217 19:25:05.946149  377556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:25:05.946975  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:25:05.947052  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.947366  377556 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 19:25:05.947385  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1217 19:25:05.947387  377556 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 19:25:05.947404  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1217 19:25:05.947434  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.947457  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.948099  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1217 19:25:05.948117  377556 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1217 19:25:05.948161  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.959889  377556 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1217 19:25:05.965754  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1217 19:25:05.966859  377556 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 19:25:05.966886  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1217 19:25:05.966956  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.968444  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1217 19:25:05.970229  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1217 19:25:05.971667  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.1
	I1217 19:25:05.972871  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1217 19:25:05.972936  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:25:05.974272  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:25:05.974333  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1217 19:25:05.978357  377556 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 19:25:05.978392  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1217 19:25:05.978454  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:05.979960  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1217 19:25:05.980667  377556 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 19:25:05.980938  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:05.981112  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:05.982258  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1217 19:25:05.983974  377556 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1217 19:25:05.985161  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1217 19:25:05.985185  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1217 19:25:05.985255  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.011979  377556 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.47.0
	I1217 19:25:06.013263  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.013832  377556 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1217 19:25:06.013910  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1217 19:25:06.014033  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.014718  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.019357  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.023295  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.025182  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.032141  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.035254  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.035767  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.039058  377556 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1217 19:25:06.039924  377556 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:25:06.039944  377556 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:25:06.040014  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.043652  377556 out.go:179]   - Using image docker.io/busybox:stable
	I1217 19:25:06.050818  377556 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 19:25:06.050843  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1217 19:25:06.050912  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:06.053628  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.055242  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.057119  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	W1217 19:25:06.060510  377556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:25:06.060658  377556 retry.go:31] will retry after 350.294268ms: ssh: handshake failed: EOF
	I1217 19:25:06.075670  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:06.092660  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	W1217 19:25:06.099148  377556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:25:06.099180  377556 retry.go:31] will retry after 144.182296ms: ssh: handshake failed: EOF
	I1217 19:25:06.104729  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	W1217 19:25:06.108930  377556 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1217 19:25:06.108959  377556 retry.go:31] will retry after 296.035682ms: ssh: handshake failed: EOF
	I1217 19:25:06.113860  377556 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:25:06.191340  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1217 19:25:06.191368  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1217 19:25:06.200312  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1217 19:25:06.207990  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1217 19:25:06.216064  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1217 19:25:06.216117  377556 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1217 19:25:06.220673  377556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1217 19:25:06.220697  377556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1217 19:25:06.233339  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1217 19:25:06.233935  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1217 19:25:06.244295  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:25:06.244883  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1217 19:25:06.245159  377556 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1217 19:25:06.255307  377556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1217 19:25:06.255336  377556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1217 19:25:06.258354  377556 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1217 19:25:06.258377  377556 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1217 19:25:06.260855  377556 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 19:25:06.260883  377556 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1217 19:25:06.267551  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1217 19:25:06.276693  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1217 19:25:06.276741  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1217 19:25:06.278671  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1217 19:25:06.286450  377556 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1217 19:25:06.286472  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1217 19:25:06.309616  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1217 19:25:06.310231  377556 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1217 19:25:06.310256  377556 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1217 19:25:06.313625  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1217 19:25:06.313652  377556 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1217 19:25:06.330896  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1217 19:25:06.330935  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1217 19:25:06.357698  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1217 19:25:06.361915  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1217 19:25:06.361946  377556 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1217 19:25:06.366448  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1217 19:25:06.366487  377556 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1217 19:25:06.391773  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1217 19:25:06.391806  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1217 19:25:06.416797  377556 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:25:06.416836  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1217 19:25:06.425379  377556 node_ready.go:35] waiting up to 6m0s for node "addons-695107" to be "Ready" ...
	I1217 19:25:06.425666  377556 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
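The injected record corresponds to a Corefile hosts block of roughly this shape (reconstructed from the sed expression applied to the coredns ConfigMap earlier in this run):

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }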
	I1217 19:25:06.447332  377556 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1217 19:25:06.447360  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1217 19:25:06.469277  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1217 19:25:06.469308  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1217 19:25:06.494810  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:25:06.512413  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:25:06.517859  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1217 19:25:06.542186  377556 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1217 19:25:06.542220  377556 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1217 19:25:06.612089  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1217 19:25:06.612116  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1217 19:25:06.676652  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1217 19:25:06.676688  377556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1217 19:25:06.686201  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1217 19:25:06.696617  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1217 19:25:06.766425  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1217 19:25:06.766452  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1217 19:25:06.832480  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1217 19:25:06.832511  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1217 19:25:06.911292  377556 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 19:25:06.911327  377556 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1217 19:25:06.950756  377556 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-695107" context rescaled to 1 replicas
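Roughly the same rescale could be done by hand (sketch only; the test drives it through the Kubernetes API rather than this exact command):

    sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1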
	I1217 19:25:06.989556  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1217 19:25:07.763061  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.529039341s)
	I1217 19:25:07.763144  377556 addons.go:495] Verifying addon ingress=true in "addons-695107"
	I1217 19:25:07.763210  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.518605361s)
	I1217 19:25:07.763343  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.495767329s)
	I1217 19:25:07.763387  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.484695807s)
	I1217 19:25:07.763467  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.453815712s)
	I1217 19:25:07.763500  377556 addons.go:495] Verifying addon metrics-server=true in "addons-695107"
	I1217 19:25:07.763748  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.406009372s)
	I1217 19:25:07.763901  377556 addons.go:495] Verifying addon registry=true in "addons-695107"
	I1217 19:25:07.763836  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.268996057s)
	I1217 19:25:07.764822  377556 out.go:179] * Verifying ingress addon...
	I1217 19:25:07.765646  377556 out.go:179] * Verifying registry addon...
	I1217 19:25:07.767373  377556 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1217 19:25:07.769154  377556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1217 19:25:07.770884  377556 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1217 19:25:07.770903  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:07.772190  377556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 19:25:07.772209  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
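The kapi waits above poll pod phase by label selector; equivalent ad-hoc checks would look like this (sketch, label selectors copied from the log):

    kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry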
	I1217 19:25:08.238972  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.726502222s)
	I1217 19:25:08.239005  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.721110576s)
	W1217 19:25:08.239029  377556 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1217 19:25:08.239055  377556 retry.go:31] will retry after 347.64053ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
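The "no matches for kind VolumeSnapshotClass" failure is an ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the CRD is established. The log shows minikube simply retrying (and, below, re-applying with --force). A common manual workaround is to gate the dependent apply on the CRD (sketch, file names taken from the log):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml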
	I1217 19:25:08.239063  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.552823259s)
	I1217 19:25:08.239135  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.542491348s)
	I1217 19:25:08.239326  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.249729211s)
	I1217 19:25:08.239358  377556 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-695107"
	I1217 19:25:08.241573  377556 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-695107 service yakd-dashboard -n yakd-dashboard
	
	I1217 19:25:08.241575  377556 out.go:179] * Verifying csi-hostpath-driver addon...
	I1217 19:25:08.244150  377556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1217 19:25:08.247169  377556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 19:25:08.247200  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:08.270188  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:08.271612  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:08.429207  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:08.586889  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1217 19:25:08.747411  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:08.771668  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:08.771820  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:09.249300  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:09.271220  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:09.271820  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:09.748069  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:09.770918  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:09.771282  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:10.247620  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:10.270110  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:10.271665  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:10.747782  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:10.770647  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:10.772274  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:10.928779  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:11.073809  377556 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.486867337s)
	I1217 19:25:11.248435  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:11.271238  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:11.271872  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:11.747561  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:11.770588  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:11.771968  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:12.247606  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:12.270259  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:12.272741  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:12.747945  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:12.771045  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:12.771513  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:12.928993  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:13.248125  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:13.270716  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:13.271576  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:13.587667  377556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1217 19:25:13.587772  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:13.606614  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:13.722163  377556 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1217 19:25:13.735841  377556 addons.go:239] Setting addon gcp-auth=true in "addons-695107"
	I1217 19:25:13.735895  377556 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:25:13.736332  377556 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:25:13.747930  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:13.754684  377556 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1217 19:25:13.754737  377556 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:25:13.771176  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:13.772406  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:13.773445  377556 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:25:13.874256  377556 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.5
	I1217 19:25:13.875702  377556 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1217 19:25:13.876906  377556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1217 19:25:13.876922  377556 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1217 19:25:13.890953  377556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1217 19:25:13.890991  377556 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1217 19:25:13.904131  377556 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 19:25:13.904153  377556 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1217 19:25:13.916609  377556 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1217 19:25:14.232995  377556 addons.go:495] Verifying addon gcp-auth=true in "addons-695107"
	I1217 19:25:14.234279  377556 out.go:179] * Verifying gcp-auth addon...
	I1217 19:25:14.238188  377556 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1217 19:25:14.240428  377556 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1217 19:25:14.240449  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
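Verifying this addon is again a label-selector wait; an ad-hoc equivalent would be something like (sketch, timeout chosen arbitrarily):

    kubectl -n gcp-auth wait --for=condition=Ready pod \
      -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m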
	I1217 19:25:14.247197  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:14.348473  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:14.348576  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:14.741125  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:14.747412  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:14.771549  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:14.771726  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:15.242454  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:15.247175  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:15.271058  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:15.271484  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:15.428547  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:15.741692  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:15.747053  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:15.770961  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:15.772388  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:16.242101  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:16.247617  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:16.270705  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:16.272242  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:16.741533  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:16.747052  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:16.771472  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:16.772510  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:17.242013  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:17.247595  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:17.270551  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:17.271968  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1217 19:25:17.428774  377556 node_ready.go:57] node "addons-695107" has "Ready":"False" status (will retry)
	I1217 19:25:17.741780  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:17.746915  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:17.770952  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:17.772610  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:18.241407  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:18.246968  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:18.271024  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:18.272199  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:18.742705  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:18.763416  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:18.783002  377556 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1217 19:25:18.783031  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:18.783901  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:18.927917  377556 node_ready.go:49] node "addons-695107" is "Ready"
	I1217 19:25:18.927952  377556 node_ready.go:38] duration metric: took 12.502528031s for node "addons-695107" to be "Ready" ...
	I1217 19:25:18.927991  377556 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:25:18.928059  377556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:25:18.942890  377556 api_server.go:72] duration metric: took 13.105919689s to wait for apiserver process to appear ...
	I1217 19:25:18.942923  377556 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:25:18.942956  377556 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1217 19:25:18.947693  377556 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1217 19:25:18.948672  377556 api_server.go:141] control plane version: v1.34.3
	I1217 19:25:18.948699  377556 api_server.go:131] duration metric: took 5.769192ms to wait for apiserver health ...
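
[editor's note] The api_server lines above poll https://192.168.49.2:8443/healthz until it answers 200 "ok" before reading the control plane version. A rough standalone equivalent of that health poll; the URL and timeout are placeholders, and TLS verification is skipped only because this sketch does not load the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // HTTP 200 or the timeout expires.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Sketch only: a real client should verify the apiserver cert.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.49.2:8443/healthz", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
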
	I1217 19:25:18.948711  377556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:25:18.953823  377556 system_pods.go:59] 20 kube-system pods found
	I1217 19:25:18.953886  377556 system_pods.go:61] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:18.953913  377556 system_pods.go:61] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:18.953928  377556 system_pods.go:61] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:18.953937  377556 system_pods.go:61] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending
	I1217 19:25:18.953947  377556 system_pods.go:61] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:18.953955  377556 system_pods.go:61] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:18.953961  377556 system_pods.go:61] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:18.953970  377556 system_pods.go:61] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:18.953975  377556 system_pods.go:61] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:18.953988  377556 system_pods.go:61] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:18.953997  377556 system_pods.go:61] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:18.954003  377556 system_pods.go:61] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:18.954013  377556 system_pods.go:61] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:18.954024  377556 system_pods.go:61] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:18.954032  377556 system_pods.go:61] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:18.954044  377556 system_pods.go:61] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:18.954050  377556 system_pods.go:61] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:18.954054  377556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending
	I1217 19:25:18.954061  377556 system_pods.go:61] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending
	I1217 19:25:18.954065  377556 system_pods.go:61] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:25:18.954085  377556 system_pods.go:74] duration metric: took 5.356317ms to wait for pod list to return data ...
	I1217 19:25:18.954095  377556 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:25:18.956186  377556 default_sa.go:45] found service account: "default"
	I1217 19:25:18.956207  377556 default_sa.go:55] duration metric: took 2.103899ms for default service account to be created ...
	I1217 19:25:18.956218  377556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:25:18.959351  377556 system_pods.go:86] 20 kube-system pods found
	I1217 19:25:18.959390  377556 system_pods.go:89] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:18.959402  377556 system_pods.go:89] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:18.959416  377556 system_pods.go:89] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:18.959423  377556 system_pods.go:89] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending
	I1217 19:25:18.959432  377556 system_pods.go:89] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:18.959441  377556 system_pods.go:89] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:18.959448  377556 system_pods.go:89] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:18.959460  377556 system_pods.go:89] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:18.959467  377556 system_pods.go:89] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:18.959479  377556 system_pods.go:89] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:18.959484  377556 system_pods.go:89] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:18.959492  377556 system_pods.go:89] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:18.959497  377556 system_pods.go:89] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:18.959506  377556 system_pods.go:89] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:18.959513  377556 system_pods.go:89] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:18.959520  377556 system_pods.go:89] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:18.959527  377556 system_pods.go:89] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:18.959536  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending
	I1217 19:25:18.959542  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending
	I1217 19:25:18.959552  377556 system_pods.go:89] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:25:18.959573  377556 retry.go:31] will retry after 237.14182ms: missing components: kube-dns
	I1217 19:25:19.211050  377556 system_pods.go:86] 20 kube-system pods found
	I1217 19:25:19.211118  377556 system_pods.go:89] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:19.211129  377556 system_pods.go:89] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:19.211138  377556 system_pods.go:89] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:19.211146  377556 system_pods.go:89] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 19:25:19.211155  377556 system_pods.go:89] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:19.211162  377556 system_pods.go:89] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:19.211169  377556 system_pods.go:89] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:19.211175  377556 system_pods.go:89] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:19.211191  377556 system_pods.go:89] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:19.211200  377556 system_pods.go:89] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:19.211211  377556 system_pods.go:89] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:19.211218  377556 system_pods.go:89] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:19.211234  377556 system_pods.go:89] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:19.211249  377556 system_pods.go:89] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:19.211265  377556 system_pods.go:89] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:19.211273  377556 system_pods.go:89] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:19.211287  377556 system_pods.go:89] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:19.211299  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.211309  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.211316  377556 system_pods.go:89] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:25:19.211340  377556 retry.go:31] will retry after 272.576358ms: missing components: kube-dns
	I1217 19:25:19.263619  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:19.263709  377556 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1217 19:25:19.263726  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:19.274862  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:19.275785  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:19.490624  377556 system_pods.go:86] 20 kube-system pods found
	I1217 19:25:19.490665  377556 system_pods.go:89] "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1217 19:25:19.490677  377556 system_pods.go:89] "coredns-66bc5c9577-gqcjx" [22d6cc15-657e-4859-9aaf-1584f8ce161d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:25:19.490689  377556 system_pods.go:89] "csi-hostpath-attacher-0" [b969567d-e5f9-4e6d-a303-02db0e756eec] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1217 19:25:19.490697  377556 system_pods.go:89] "csi-hostpath-resizer-0" [e4954ec6-aa54-40c6-9c84-70287d004936] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1217 19:25:19.490706  377556 system_pods.go:89] "csi-hostpathplugin-j4557" [971e8c2b-7ddd-4d3f-84f8-e3a736f466b4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1217 19:25:19.490711  377556 system_pods.go:89] "etcd-addons-695107" [70b11d43-62b9-4529-9f0c-8307f62e449c] Running
	I1217 19:25:19.490717  377556 system_pods.go:89] "kindnet-dkw9t" [b177cd3a-1117-4c7f-b24d-8872ec987afc] Running
	I1217 19:25:19.490723  377556 system_pods.go:89] "kube-apiserver-addons-695107" [b26057f7-6504-4cab-beba-289a4ebc7ca5] Running
	I1217 19:25:19.490728  377556 system_pods.go:89] "kube-controller-manager-addons-695107" [5bb06b51-7d12-402c-bd06-507791a2d2a5] Running
	I1217 19:25:19.490750  377556 system_pods.go:89] "kube-ingress-dns-minikube" [8033a74b-624e-496a-a8e1-f1e3a179e00d] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1217 19:25:19.490757  377556 system_pods.go:89] "kube-proxy-fqlbd" [a49f20f8-88d7-43f9-9616-20d6b8e3f194] Running
	I1217 19:25:19.490764  377556 system_pods.go:89] "kube-scheduler-addons-695107" [01d6c101-5044-4481-aeb2-45cca581927b] Running
	I1217 19:25:19.490773  377556 system_pods.go:89] "metrics-server-85b7d694d7-tqbbx" [f8c2c133-1dbb-4007-8e9f-dbd891b5c4e1] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 19:25:19.490781  377556 system_pods.go:89] "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1217 19:25:19.490789  377556 system_pods.go:89] "registry-6b586f9694-2jvdr" [d850b7ca-185a-40a6-bd67-035ed864cc70] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1217 19:25:19.490797  377556 system_pods.go:89] "registry-creds-764b6fb674-lglwq" [58c8feae-1fa3-4ac5-b69e-212b116a2c16] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1217 19:25:19.490808  377556 system_pods.go:89] "registry-proxy-8dlbt" [2eed962d-54b9-4a44-a7d8-38bf999b5d29] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1217 19:25:19.490815  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-fgm6r" [e670abf9-bb25-4083-85d2-67fd2aa6d734] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.490828  377556 system_pods.go:89] "snapshot-controller-7d9fbc56b8-pvnhq" [ddc2a00e-7044-4134-a0e5-a9ce980af62e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1217 19:25:19.490833  377556 system_pods.go:89] "storage-provisioner" [58d8e209-60d8-4105-bd32-336cde196461] Running
	I1217 19:25:19.490847  377556 system_pods.go:126] duration metric: took 534.620447ms to wait for k8s-apps to be running ...
	I1217 19:25:19.490860  377556 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:25:19.490919  377556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:25:19.510563  377556 system_svc.go:56] duration metric: took 19.691657ms WaitForService to wait for kubelet
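
[editor's note] The system_svc lines above verify that the kubelet unit is active by shelling out to systemctl. A tiny sketch of the same idea using the standard `systemctl is-active --quiet kubelet` form (the log's exact argument order differs slightly):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // kubeletActive reports whether the kubelet systemd unit is active.
    // `systemctl is-active --quiet` exits 0 only when the unit is active,
    // so a nil error from Run means the service is running.
    func kubeletActive() bool {
    	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }

    func main() {
    	fmt.Println("kubelet active:", kubeletActive())
    }
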
	I1217 19:25:19.510604  377556 kubeadm.go:587] duration metric: took 13.673636357s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:25:19.510631  377556 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:25:19.514637  377556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:25:19.514671  377556 node_conditions.go:123] node cpu capacity is 8
	I1217 19:25:19.514695  377556 node_conditions.go:105] duration metric: took 4.054182ms to run NodePressure ...
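
[editor's note] The node_conditions lines above read the node's ephemeral-storage and CPU capacity (304681132Ki and 8 here). A sketch of fetching the same fields with client-go, assuming a kubeconfig at a placeholder path and the node name taken from the log:

    package main

    import (
    	"context"
    	"fmt"

    	v1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // printNodeCapacity reads a node's CPU and ephemeral-storage capacity,
    // the two values summarized in the node_conditions log lines.
    func printNodeCapacity(kubeconfig, nodeName string) error {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return err
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return err
    	}
    	node, err := client.CoreV1().Nodes().Get(context.Background(), nodeName, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	// Copy the quantities to local variables so their String methods can be called.
    	cpu := node.Status.Capacity[v1.ResourceCPU]
    	storage := node.Status.Capacity[v1.ResourceEphemeralStorage]
    	fmt.Printf("node %s: cpu capacity %s, ephemeral storage %s\n", nodeName, cpu.String(), storage.String())
    	return nil
    }

    func main() {
    	if err := printNodeCapacity("/var/lib/minikube/kubeconfig", "addons-695107"); err != nil {
    		fmt.Println(err)
    	}
    }
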
	I1217 19:25:19.514717  377556 start.go:242] waiting for startup goroutines ...
	I1217 19:25:19.743485  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:19.748149  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:19.772213  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:19.773006  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:20.241714  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:20.247875  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:20.271286  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:20.272452  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:20.743039  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:20.748003  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:20.771555  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:20.772727  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:21.242433  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:21.246579  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:21.270925  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:21.272259  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:21.742861  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:21.747910  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:21.772565  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:21.773045  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:22.241595  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:22.247950  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:22.271230  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:22.273469  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:22.742139  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:22.748002  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:22.771274  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:22.772710  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:23.242039  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:23.248344  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:23.271824  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:23.272005  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:23.743018  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:23.748205  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:23.771571  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:23.772831  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:24.240954  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:24.247882  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:24.270537  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:24.272201  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:24.741825  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:24.747621  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:24.770832  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:24.772668  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:25.242284  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:25.248273  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:25.271495  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:25.271777  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:25.742351  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:25.746788  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:25.771099  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:25.772689  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:26.242477  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:26.247938  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:26.271020  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:26.273136  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:26.741881  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:26.747688  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:26.770790  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:26.772353  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:27.241663  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:27.247664  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:27.271630  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:27.272143  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:27.742199  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:27.748111  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:27.771990  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:27.773619  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:28.242509  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:28.247314  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:28.271471  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:28.272045  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:28.741900  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:28.747677  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:28.770771  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:28.772676  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:29.242751  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:29.249825  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:29.317130  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:29.317260  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:29.742959  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:29.748230  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:29.771588  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:29.773110  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:30.241924  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:30.247325  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:30.272032  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:30.272211  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:30.741667  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:30.747180  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:30.770965  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:30.772779  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:31.242608  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:31.247053  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:31.270929  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:31.272496  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:31.741865  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:31.747945  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:31.771182  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:31.772799  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:32.242305  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:32.247289  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:32.271368  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:32.272810  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:32.742985  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:32.747704  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:32.770966  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:32.772534  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:33.242144  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:33.247557  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:33.271488  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:33.273710  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:33.742404  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:33.746739  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:33.771008  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:33.773286  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:34.242343  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:34.247731  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:34.271880  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:34.272113  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:34.742347  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:34.843268  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:34.843543  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:34.843574  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:35.242461  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:35.247275  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:35.271534  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:35.271777  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:35.742103  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:35.747578  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:35.771383  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:35.771776  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:36.241482  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:36.246909  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:36.270603  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:36.272338  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:36.741986  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:36.747267  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:36.770863  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:36.771594  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:37.241596  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:37.247312  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:37.271262  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:37.271926  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:37.741591  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:37.747167  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:37.771555  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:37.772691  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:38.241521  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:38.247472  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:38.271780  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:38.272370  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:38.741445  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:38.747135  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:38.770954  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:38.772675  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:39.267032  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:39.267137  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:39.309717  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:39.309987  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:39.742657  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:39.747308  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:39.771015  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:39.771758  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:40.242026  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:40.247998  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:40.271117  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:40.272390  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:40.741926  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:40.747878  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:40.771319  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:40.772460  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:41.242173  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:41.247954  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:41.270958  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:41.272451  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:41.743500  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:41.747230  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:41.771415  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:41.772657  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:42.242232  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:42.247909  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:42.271252  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:42.272475  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1217 19:25:42.742011  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:42.749482  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:42.772293  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:42.772589  377556 kapi.go:107] duration metric: took 35.003435052s to wait for kubernetes.io/minikube-addons=registry ...
	I1217 19:25:43.243167  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:43.343644  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:43.343644  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:43.741858  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:43.748529  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:43.771905  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:44.242099  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:44.248459  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:44.271897  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:44.742745  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:44.748093  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:44.771042  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:45.242686  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:45.247865  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:45.271099  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:45.743368  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:45.748400  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:45.772703  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:46.242429  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:46.249309  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:46.271960  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:46.742448  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:46.747682  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:46.771866  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:47.241791  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:47.247124  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:47.270972  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:47.743032  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:47.748969  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:47.770936  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:48.242454  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:48.247350  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:48.271623  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:48.742455  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:48.747401  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:48.771735  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:49.242631  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:49.247697  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:49.271960  377556 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1217 19:25:49.743069  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:49.748546  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:49.771989  377556 kapi.go:107] duration metric: took 42.00461088s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1217 19:25:50.241760  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:50.247332  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:50.741942  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:50.748005  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:51.242619  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:51.247205  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:51.741861  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:51.747658  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:52.241813  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:52.247640  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:52.742443  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:52.747171  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:53.242408  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1217 19:25:53.247256  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:53.742788  377556 kapi.go:107] duration metric: took 39.504610513s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1217 19:25:53.744624  377556 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-695107 cluster.
	I1217 19:25:53.746129  377556 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1217 19:25:53.747659  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:53.749192  377556 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1217 19:25:54.247529  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:54.748204  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:55.248107  377556 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1217 19:25:55.748326  377556 kapi.go:107] duration metric: took 47.504172695s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1217 19:25:55.750186  377556 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, storage-provisioner, inspektor-gadget, cloud-spanner, metrics-server, default-storageclass, nvidia-device-plugin, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1217 19:25:55.751380  377556 addons.go:530] duration metric: took 49.914381753s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns storage-provisioner inspektor-gadget cloud-spanner metrics-server default-storageclass nvidia-device-plugin yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1217 19:25:55.751433  377556 start.go:247] waiting for cluster config update ...
	I1217 19:25:55.751459  377556 start.go:256] writing updated cluster config ...
	I1217 19:25:55.751734  377556 ssh_runner.go:195] Run: rm -f paused
	I1217 19:25:55.755746  377556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:25:55.758781  377556 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-gqcjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.763040  377556 pod_ready.go:94] pod "coredns-66bc5c9577-gqcjx" is "Ready"
	I1217 19:25:55.763065  377556 pod_ready.go:86] duration metric: took 4.262868ms for pod "coredns-66bc5c9577-gqcjx" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.765035  377556 pod_ready.go:83] waiting for pod "etcd-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.768637  377556 pod_ready.go:94] pod "etcd-addons-695107" is "Ready"
	I1217 19:25:55.768658  377556 pod_ready.go:86] duration metric: took 3.601877ms for pod "etcd-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.770612  377556 pod_ready.go:83] waiting for pod "kube-apiserver-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.774189  377556 pod_ready.go:94] pod "kube-apiserver-addons-695107" is "Ready"
	I1217 19:25:55.774208  377556 pod_ready.go:86] duration metric: took 3.576591ms for pod "kube-apiserver-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:55.775794  377556 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.160327  377556 pod_ready.go:94] pod "kube-controller-manager-addons-695107" is "Ready"
	I1217 19:25:56.160361  377556 pod_ready.go:86] duration metric: took 384.547978ms for pod "kube-controller-manager-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.359891  377556 pod_ready.go:83] waiting for pod "kube-proxy-fqlbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.759880  377556 pod_ready.go:94] pod "kube-proxy-fqlbd" is "Ready"
	I1217 19:25:56.759910  377556 pod_ready.go:86] duration metric: took 399.963561ms for pod "kube-proxy-fqlbd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:56.959998  377556 pod_ready.go:83] waiting for pod "kube-scheduler-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:57.360303  377556 pod_ready.go:94] pod "kube-scheduler-addons-695107" is "Ready"
	I1217 19:25:57.360334  377556 pod_ready.go:86] duration metric: took 400.303802ms for pod "kube-scheduler-addons-695107" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:25:57.360346  377556 pod_ready.go:40] duration metric: took 1.604568997s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:25:57.407405  377556 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 19:25:57.410188  377556 out.go:179] * Done! kubectl is now configured to use "addons-695107" cluster and "default" namespace by default
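	
	Note on the gcp-auth messages above: pods can opt out of having GCP credentials mounted by carrying the `gcp-auth-skip-secret` label. A minimal client-go sketch of creating such a pod follows; the pod name, namespace, label value, and kubeconfig path are illustrative assumptions, not values taken from this run.
	
	package main
	
	import (
		"context"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the kubeconfig that "minikube start" writes (default ~/.kube/config; assumed path).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
	
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical pod name
				// Label key mentioned in the minikube output above; the value is arbitrary.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "busybox",
					Image:   "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
					Command: []string{"sleep", "3600"},
				}},
			},
		}
	
		// Create the pod in the default namespace; the gcp-auth webhook should skip it.
		if _, err := clientset.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}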
	
	
	==> CRI-O <==
	Dec 17 19:25:54 addons-695107 crio[774]: time="2025-12-17T19:25:54.863621487Z" level=info msg="Starting container: 05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4" id=64e00571-9f59-45bb-80fe-14013e8354aa name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 19:25:54 addons-695107 crio[774]: time="2025-12-17T19:25:54.867103322Z" level=info msg="Started container" PID=6179 containerID=05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4 description=kube-system/csi-hostpathplugin-j4557/csi-snapshotter id=64e00571-9f59-45bb-80fe-14013e8354aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=a931e3408d3e6d7856171f4b0bf6b53120bcb63ee77eaaa378c9dd412eb00f78
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.242613292Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a3b526aa-358c-4831-8f08-26c3649e6724 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.242690254Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.249454193Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1ba8d993fc0cf37412891743d225233a45d2c5cbfad2086a9ace234e3d2286ad UID:0895821d-164d-43f0-b04c-41cd5a505dbf NetNS:/var/run/netns/9bdd8265-b216-4337-91de-d090e1064be9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009de1a0}] Aliases:map[]}"
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.249496329Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.259412267Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1ba8d993fc0cf37412891743d225233a45d2c5cbfad2086a9ace234e3d2286ad UID:0895821d-164d-43f0-b04c-41cd5a505dbf NetNS:/var/run/netns/9bdd8265-b216-4337-91de-d090e1064be9 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009de1a0}] Aliases:map[]}"
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.259543154Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.260426242Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.26121459Z" level=info msg="Ran pod sandbox 1ba8d993fc0cf37412891743d225233a45d2c5cbfad2086a9ace234e3d2286ad with infra container: default/busybox/POD" id=a3b526aa-358c-4831-8f08-26c3649e6724 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.262633414Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a17ada33-205c-4187-a1c7-9ab20e99f104 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.262787874Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a17ada33-205c-4187-a1c7-9ab20e99f104 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.262824322Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=a17ada33-205c-4187-a1c7-9ab20e99f104 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.263481244Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c26bd148-9235-4b28-a8b4-bbfa093b7102 name=/runtime.v1.ImageService/PullImage
	Dec 17 19:25:58 addons-695107 crio[774]: time="2025-12-17T19:25:58.265046859Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.482535134Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c26bd148-9235-4b28-a8b4-bbfa093b7102 name=/runtime.v1.ImageService/PullImage
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.483130532Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f95c9825-61ae-4490-a0b7-7bc7aac3f65b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.48447995Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2df0f7b5-ad0b-430f-97b2-cd5b95fddbf2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.488352795Z" level=info msg="Creating container: default/busybox/busybox" id=f61f1af1-4d2e-4227-8849-15e630f08c60 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.488498566Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.494491606Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.495032818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.54652592Z" level=info msg="Created container 6ceebc8cecc61aeae844b0e6437f1addc03595c01a0d94d049433b9dfb65c76d: default/busybox/busybox" id=f61f1af1-4d2e-4227-8849-15e630f08c60 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.547279875Z" level=info msg="Starting container: 6ceebc8cecc61aeae844b0e6437f1addc03595c01a0d94d049433b9dfb65c76d" id=7ff28f08-ddd8-4807-8c74-ed5d27c2c614 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 19:25:59 addons-695107 crio[774]: time="2025-12-17T19:25:59.549506754Z" level=info msg="Started container" PID=6295 containerID=6ceebc8cecc61aeae844b0e6437f1addc03595c01a0d94d049433b9dfb65c76d description=default/busybox/busybox id=7ff28f08-ddd8-4807-8c74-ed5d27c2c614 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1ba8d993fc0cf37412891743d225233a45d2c5cbfad2086a9ace234e3d2286ad
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	6ceebc8cecc61       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   1ba8d993fc0cf       busybox                                     default
	05e7c087fc88a       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	030ee45fef382       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          14 seconds ago       Running             csi-provisioner                          0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	1bf59a626763d       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   68281e22beef0       gcp-auth-78565c9fb4-47zbj                   gcp-auth
	e582a6b346e42       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            16 seconds ago       Running             liveness-probe                           0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	6f1389fbed5a8       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	bb406a59b4704       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                18 seconds ago       Running             node-driver-registrar                    0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	346a00d466786       registry.k8s.io/ingress-nginx/controller@sha256:d552aeecf01939bd11bdc4fa57ce7437d42651194a61edcd6b7aea44b9e74cad                             18 seconds ago       Running             controller                               0                   19f1fe8ebf827       ingress-nginx-controller-85d4c799dd-8mcfr   ingress-nginx
	aba3d9ac9ad0f       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:ea428be7b01d41418fca4d91ae3dff6b037bdc0d42757e7ad392a38536488a1a                            22 seconds ago       Running             gadget                                   0                   a04bb218fcebf       gadget-7dc2q                                gadget
	7927a0e1520a1       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     24 seconds ago       Running             amd-gpu-device-plugin                    0                   8d0e5b5714e9f       amd-gpu-device-plugin-xl62h                 kube-system
	4fd8c32f1f75b       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   25 seconds ago       Running             csi-external-health-monitor-controller   0                   a931e3408d3e6       csi-hostpathplugin-j4557                    kube-system
	a34979fddc504       a3e52b258ac92bfd8c401650a3af7e8fce635d90559da61f252ad97e7d6c179e                                                                             25 seconds ago       Exited              patch                                    1                   a3c7190d7b4ba       gcp-auth-certs-patch-st9qp                  gcp-auth
	88de9cb787252       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   26 seconds ago       Exited              create                                   0                   0a529e610d8d7       gcp-auth-certs-create-bkcmb                 gcp-auth
	3e0c0283ddfb5       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              26 seconds ago       Running             registry-proxy                           0                   72c17c4898d8e       registry-proxy-8dlbt                        kube-system
	1309939d3b4da       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      28 seconds ago       Running             volume-snapshot-controller               0                   f89451a363726       snapshot-controller-7d9fbc56b8-fgm6r        kube-system
	8f0c2abe1917b       nvcr.io/nvidia/k8s-device-plugin@sha256:c3c1a099015d1810c249ba294beaad656ce0354f7e8a77803dacabe60a4f8c9f                                     28 seconds ago       Running             nvidia-device-plugin-ctr                 0                   37af11647fb9c       nvidia-device-plugin-daemonset-5hdv7        kube-system
	801db4b070e91       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      31 seconds ago       Running             volume-snapshot-controller               0                   57d29a23d4915       snapshot-controller-7d9fbc56b8-pvnhq        kube-system
	c7eea19f4d49e       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              33 seconds ago       Running             csi-resizer                              0                   5b2c9baa92816       csi-hostpath-resizer-0                      kube-system
	51a71566b557a       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   b62c025e344f6       csi-hostpath-attacher-0                     kube-system
	48b0e3db6f0aa       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               34 seconds ago       Running             cloud-spanner-emulator                   0                   ce2ca032f5616       cloud-spanner-emulator-5bdddb765-kzhtq      default
	f513241821060       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   37 seconds ago       Exited              patch                                    0                   86551c7f8f2af       ingress-nginx-admission-patch-6bdmz         ingress-nginx
	8cf6f22d4cee1       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:e2d8d9e1553c1ac5f9f41bc34d38d1eda519ed77a3106b036c43b6667dad19a9                   37 seconds ago       Exited              create                                   0                   786d7b47bd169       ingress-nginx-admission-create-rz9rh        ingress-nginx
	6368372bba7d6       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             38 seconds ago       Running             local-path-provisioner                   0                   0379ce408a490       local-path-provisioner-648f6765c9-26mcp     local-path-storage
	a485e9f994ff9       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        39 seconds ago       Running             metrics-server                           0                   714ab275b341d       metrics-server-85b7d694d7-tqbbx             kube-system
	04f733eceac24       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           40 seconds ago       Running             registry                                 0                   41c8193fe9db5       registry-6b586f9694-2jvdr                   kube-system
	c3f541802ca32       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               42 seconds ago       Running             minikube-ingress-dns                     0                   cc4a1df792692       kube-ingress-dns-minikube                   kube-system
	fbf994d990fd1       docker.io/marcnuri/yakd@sha256:ef51bed688eb0feab1405f97b7286dfe1da3c61e5a189ce4ae34a90c9f9cf8aa                                              46 seconds ago       Running             yakd                                     0                   eb38fc22772c6       yakd-dashboard-6654c87f9b-mcjdv             yakd-dashboard
	e3aca076801c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   1ad4f96fbe212       storage-provisioner                         kube-system
	f32dab99d943e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   a7e7a5180fc84       coredns-66bc5c9577-gqcjx                    kube-system
	b68b1b351d2b0       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27                                           59 seconds ago       Running             kindnet-cni                              0                   737cc0abe5ef6       kindnet-dkw9t                               kube-system
	bc8813162646d       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                                                             About a minute ago   Running             kube-proxy                               0                   9f87571df8090       kube-proxy-fqlbd                            kube-system
	bea3125cf2914       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                                                             About a minute ago   Running             kube-apiserver                           0                   0fb0ad136d8a3       kube-apiserver-addons-695107                kube-system
	5875440c2f308       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                                                             About a minute ago   Running             kube-scheduler                           0                   b127cca34e4a1       kube-scheduler-addons-695107                kube-system
	87468d7032ea6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                                                             About a minute ago   Running             etcd                                     0                   1ff3a1be0c30a       etcd-addons-695107                          kube-system
	fd7cf6d64d69e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                                                             About a minute ago   Running             kube-controller-manager                  0                   85d669b40b72a       kube-controller-manager-addons-695107       kube-system
	
	
	==> coredns [f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e] <==
	[INFO] 10.244.0.12:52691 - 43127 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000124479s
	[INFO] 10.244.0.12:58813 - 48973 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111453s
	[INFO] 10.244.0.12:58813 - 48698 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000143569s
	[INFO] 10.244.0.12:35885 - 17295 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000078718s
	[INFO] 10.244.0.12:35885 - 17005 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000109058s
	[INFO] 10.244.0.12:54051 - 35430 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000071569s
	[INFO] 10.244.0.12:54051 - 35050 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00009506s
	[INFO] 10.244.0.12:33951 - 48775 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000057988s
	[INFO] 10.244.0.12:33951 - 48474 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000096089s
	[INFO] 10.244.0.12:54033 - 31461 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000144473s
	[INFO] 10.244.0.12:54033 - 30998 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000210919s
	[INFO] 10.244.0.22:60339 - 10799 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000198637s
	[INFO] 10.244.0.22:49554 - 54945 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000263536s
	[INFO] 10.244.0.22:56028 - 24187 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117536s
	[INFO] 10.244.0.22:34208 - 26874 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000169426s
	[INFO] 10.244.0.22:48619 - 49138 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000136125s
	[INFO] 10.244.0.22:40935 - 62714 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127924s
	[INFO] 10.244.0.22:48515 - 50585 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.008736479s
	[INFO] 10.244.0.22:40378 - 21566 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.010368846s
	[INFO] 10.244.0.22:51793 - 49561 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.007977127s
	[INFO] 10.244.0.22:53031 - 591 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.008559816s
	[INFO] 10.244.0.22:39894 - 12422 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005723779s
	[INFO] 10.244.0.22:44189 - 61358 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006756026s
	[INFO] 10.244.0.22:51634 - 450 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002162985s
	[INFO] 10.244.0.22:57480 - 58787 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002659393s
	
	
	==> describe nodes <==
	Name:               addons-695107
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-695107
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=addons-695107
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_25_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-695107
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-695107"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:24:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-695107
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:26:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:26:01 +0000   Wed, 17 Dec 2025 19:24:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:26:01 +0000   Wed, 17 Dec 2025 19:24:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:26:01 +0000   Wed, 17 Dec 2025 19:24:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:26:01 +0000   Wed, 17 Dec 2025 19:25:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-695107
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e217694c-a589-401c-9719-5d685e266755
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-kzhtq       0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  gadget                      gadget-7dc2q                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  gcp-auth                    gcp-auth-78565c9fb4-47zbj                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  ingress-nginx               ingress-nginx-controller-85d4c799dd-8mcfr    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         60s
	  kube-system                 amd-gpu-device-plugin-xl62h                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-gqcjx                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     62s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 csi-hostpathplugin-j4557                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-695107                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         68s
	  kube-system                 kindnet-dkw9t                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      62s
	  kube-system                 kube-apiserver-addons-695107                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-controller-manager-addons-695107        200m (2%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-fqlbd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-scheduler-addons-695107                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 metrics-server-85b7d694d7-tqbbx              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         60s
	  kube-system                 nvidia-device-plugin-daemonset-5hdv7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-2jvdr                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 registry-creds-764b6fb674-lglwq              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 registry-proxy-8dlbt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-fgm6r         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 snapshot-controller-7d9fbc56b8-pvnhq         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  local-path-storage          local-path-provisioner-648f6765c9-26mcp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  yakd-dashboard              yakd-dashboard-6654c87f9b-mcjdv              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     60s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 60s   kube-proxy       
	  Normal  Starting                 67s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  67s   kubelet          Node addons-695107 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    67s   kubelet          Node addons-695107 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     67s   kubelet          Node addons-695107 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           63s   node-controller  Node addons-695107 event: Registered Node addons-695107 in Controller
	  Normal  NodeReady                49s   kubelet          Node addons-695107 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000023] ll header: 00000000: 1a 8a b0 e4 a1 d6 22 5c 53 fa 7b f3 08 00
	[ +32.252595] IPv4: martian source 10.244.0.22 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 1a 8a b0 e4 a1 d6 22 5c 53 fa 7b f3 08 00
	[Dec17 19:18] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 62 c4 d3 e5 73 4f 08 06
	[  +5.672106] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 8e ae 4c ea 64 08 06
	[Dec17 19:19] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff fa a2 3b ef db b5 08 06
	[  +0.000499] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 62 c4 d3 e5 73 4f 08 06
	[ +31.241444] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000013] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[  +7.057801] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff e2 eb 65 78 0f 2d 08 06
	[  +0.000409] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2e 8e ae 4c ea 64 08 06
	[Dec17 19:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 15 cf cb 1d f9 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8e 5d 6d 84 aa a1 08 06
	[ +11.290534] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	
	
	==> etcd [87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e] <==
	{"level":"warn","ts":"2025-12-17T19:24:57.114892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.122124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.128973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.135456Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.149021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.155227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.162037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.170126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.190543Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.194327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.201139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.208481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:24:57.263153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:08.818923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:08.825865Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.660968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.670410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.685394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T19:25:34.693937Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41736","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T19:26:00.639105Z","caller":"traceutil/trace.go:172","msg":"trace[1689525748] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"124.834485ms","start":"2025-12-17T19:26:00.514220Z","end":"2025-12-17T19:26:00.639055Z","steps":["trace[1689525748] 'process raft request'  (duration: 60.544661ms)","trace[1689525748] 'compare'  (duration: 64.194071ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T19:26:00.639300Z","caller":"traceutil/trace.go:172","msg":"trace[237250783] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"124.876352ms","start":"2025-12-17T19:26:00.514409Z","end":"2025-12-17T19:26:00.639285Z","steps":["trace[237250783] 'process raft request'  (duration: 124.656588ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639321Z","caller":"traceutil/trace.go:172","msg":"trace[274642689] transaction","detail":"{read_only:false; response_revision:1218; number_of_response:1; }","duration":"121.211403ms","start":"2025-12-17T19:26:00.518098Z","end":"2025-12-17T19:26:00.639309Z","steps":["trace[274642689] 'process raft request'  (duration: 121.173341ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639488Z","caller":"traceutil/trace.go:172","msg":"trace[1760482284] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"125.053728ms","start":"2025-12-17T19:26:00.514421Z","end":"2025-12-17T19:26:00.639475Z","steps":["trace[1760482284] 'process raft request'  (duration: 124.722702ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639359Z","caller":"traceutil/trace.go:172","msg":"trace[921640965] transaction","detail":"{read_only:false; response_revision:1217; number_of_response:1; }","duration":"123.333842ms","start":"2025-12-17T19:26:00.516017Z","end":"2025-12-17T19:26:00.639351Z","steps":["trace[921640965] 'process raft request'  (duration: 123.214702ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:26:00.639575Z","caller":"traceutil/trace.go:172","msg":"trace[813321175] transaction","detail":"{read_only:false; response_revision:1216; number_of_response:1; }","duration":"125.145965ms","start":"2025-12-17T19:26:00.514419Z","end":"2025-12-17T19:26:00.639565Z","steps":["trace[813321175] 'process raft request'  (duration: 124.772555ms)"],"step_count":1}
	
	
	==> gcp-auth [1bf59a626763deba3e4128c122638ed60fa800d27b9db4eca8e6bd2a3a6bb2ff] <==
	2025/12/17 19:25:52 GCP Auth Webhook started!
	2025/12/17 19:25:57 Ready to marshal response ...
	2025/12/17 19:25:57 Ready to write response ...
	2025/12/17 19:25:57 Ready to marshal response ...
	2025/12/17 19:25:57 Ready to write response ...
	2025/12/17 19:25:57 Ready to marshal response ...
	2025/12/17 19:25:57 Ready to write response ...
	
	
	==> kernel <==
	 19:26:08 up  1:08,  0 user,  load average: 3.07, 2.89, 2.36
	Linux addons-695107 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388] <==
	I1217 19:25:08.374319       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 19:25:08.374587       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1217 19:25:08.374730       1 main.go:148] setting mtu 1500 for CNI 
	I1217 19:25:08.374751       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 19:25:08.374770       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T19:25:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 19:25:08.579326       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 19:25:08.579758       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 19:25:08.579812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 19:25:08.669491       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 19:25:09.169174       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 19:25:09.169228       1 metrics.go:72] Registering metrics
	I1217 19:25:09.169287       1 controller.go:711] "Syncing nftables rules"
	I1217 19:25:18.582809       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:25:18.582872       1 main.go:301] handling current node
	I1217 19:25:28.580265       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:25:28.580320       1 main.go:301] handling current node
	I1217 19:25:38.580158       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:25:38.580223       1 main.go:301] handling current node
	I1217 19:25:48.579362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:25:48.579409       1 main.go:301] handling current node
	I1217 19:25:58.579672       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1217 19:25:58.579728       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22] <==
	I1217 19:25:14.174025       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.101.148.239"}
	W1217 19:25:18.760531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.148.239:443: connect: connection refused
	E1217 19:25:18.760697       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:18.760603       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.148.239:443: connect: connection refused
	E1217 19:25:18.760835       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:18.782717       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.148.239:443: connect: connection refused
	E1217 19:25:18.782872       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:18.786248       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.148.239:443: connect: connection refused
	E1217 19:25:18.786351       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.148.239:443: connect: connection refused" logger="UnhandledError"
	W1217 19:25:30.353050       1 handler_proxy.go:99] no RequestInfo found in the context
	E1217 19:25:30.353149       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1217 19:25:30.353122       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	E1217 19:25:30.354596       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	E1217 19:25:30.360443       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	E1217 19:25:30.381468       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.102.138.161:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.102.138.161:443: connect: connection refused" logger="UnhandledError"
	I1217 19:25:30.450710       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1217 19:25:34.660925       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 19:25:34.670341       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 19:25:34.685325       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1217 19:25:34.693914       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1217 19:26:06.093000       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50062: use of closed network connection
	E1217 19:26:06.242238       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50076: use of closed network connection
	
	
	==> kube-controller-manager [fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33] <==
	I1217 19:25:04.641848       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 19:25:04.641947       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 19:25:04.641956       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 19:25:04.641992       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 19:25:04.642131       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 19:25:04.643455       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1217 19:25:04.645785       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 19:25:04.646988       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:04.647009       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:04.647044       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 19:25:04.647062       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 19:25:04.647134       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 19:25:04.649414       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 19:25:04.654634       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 19:25:04.654728       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 19:25:04.654836       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="addons-695107"
	I1217 19:25:04.654886       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 19:25:04.666791       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1217 19:25:07.358478       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1217 19:25:19.657634       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1217 19:25:34.652784       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1217 19:25:34.652874       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1217 19:25:34.677066       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1217 19:25:34.753332       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:25:34.777835       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56] <==
	I1217 19:25:06.273168       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:25:06.835714       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 19:25:06.953997       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 19:25:06.954036       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1217 19:25:06.954135       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:25:07.196067       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 19:25:07.196301       1 server_linux.go:132] "Using iptables Proxier"
	I1217 19:25:07.270703       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:25:07.273119       1 server.go:527] "Version info" version="v1.34.3"
	I1217 19:25:07.273214       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:25:07.276381       1 config.go:200] "Starting service config controller"
	I1217 19:25:07.276442       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:25:07.276953       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:25:07.277012       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:25:07.277051       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:25:07.277058       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:25:07.277174       1 config.go:309] "Starting node config controller"
	I1217 19:25:07.277181       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:25:07.277188       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:25:07.376837       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:25:07.378237       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:25:07.378237       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16] <==
	E1217 19:24:57.662372       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:24:57.662463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 19:24:57.662478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:24:57.662483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:24:57.662586       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:24:57.662597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 19:24:57.662624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:24:57.662663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:24:57.662723       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:24:57.662717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:24:57.662812       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:24:58.521384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 19:24:58.522696       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:24:58.532914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:24:58.546456       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:24:58.583159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:24:58.591441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:24:58.599729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:24:58.604974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:24:58.626039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:24:58.635244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 19:24:58.732029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 19:24:58.801925       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:24:58.923605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1217 19:25:00.358155       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 19:25:42 addons-695107 kubelet[1289]: I1217 19:25:42.385009    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-8dlbt" podStartSLOduration=2.25021033 podStartE2EDuration="24.384985618s" podCreationTimestamp="2025-12-17 19:25:18 +0000 UTC" firstStartedPulling="2025-12-17 19:25:19.207424744 +0000 UTC m=+19.132963328" lastFinishedPulling="2025-12-17 19:25:41.342200035 +0000 UTC m=+41.267738616" observedRunningTime="2025-12-17 19:25:42.383824615 +0000 UTC m=+42.309363217" watchObservedRunningTime="2025-12-17 19:25:42.384985618 +0000 UTC m=+42.310524220"
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.383288    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xl62h" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.385594    1289 scope.go:117] "RemoveContainer" containerID="80f7cc5dedb881d09ed5077169c9f972d179af55f2631aaa0f25436470f4dda9"
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.385809    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8dlbt" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.396168    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-xl62h" podStartSLOduration=1.4182836189999999 podStartE2EDuration="25.396145296s" podCreationTimestamp="2025-12-17 19:25:18 +0000 UTC" firstStartedPulling="2025-12-17 19:25:19.21713291 +0000 UTC m=+19.142671510" lastFinishedPulling="2025-12-17 19:25:43.194994598 +0000 UTC m=+43.120533187" observedRunningTime="2025-12-17 19:25:43.396144381 +0000 UTC m=+43.321682984" watchObservedRunningTime="2025-12-17 19:25:43.396145296 +0000 UTC m=+43.321683899"
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.519432    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2m9q8\" (UniqueName: \"kubernetes.io/projected/17c6cb11-f643-4754-b7f1-0a57163d93cd-kube-api-access-2m9q8\") pod \"17c6cb11-f643-4754-b7f1-0a57163d93cd\" (UID: \"17c6cb11-f643-4754-b7f1-0a57163d93cd\") "
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.522573    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17c6cb11-f643-4754-b7f1-0a57163d93cd-kube-api-access-2m9q8" (OuterVolumeSpecName: "kube-api-access-2m9q8") pod "17c6cb11-f643-4754-b7f1-0a57163d93cd" (UID: "17c6cb11-f643-4754-b7f1-0a57163d93cd"). InnerVolumeSpecName "kube-api-access-2m9q8". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 19:25:43 addons-695107 kubelet[1289]: I1217 19:25:43.621140    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2m9q8\" (UniqueName: \"kubernetes.io/projected/17c6cb11-f643-4754-b7f1-0a57163d93cd-kube-api-access-2m9q8\") on node \"addons-695107\" DevicePath \"\""
	Dec 17 19:25:44 addons-695107 kubelet[1289]: I1217 19:25:44.391468    1289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a529e610d8d7f49c4df2bd427682abf0742847be17265f8cb642f2036c6119b"
	Dec 17 19:25:44 addons-695107 kubelet[1289]: I1217 19:25:44.393401    1289 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-xl62h" secret="" err="secret \"gcp-auth\" not found"
	Dec 17 19:25:45 addons-695107 kubelet[1289]: I1217 19:25:45.135447    1289 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbd2v\" (UniqueName: \"kubernetes.io/projected/8c1f2ad2-9382-4863-9b27-57e07317055a-kube-api-access-jbd2v\") pod \"8c1f2ad2-9382-4863-9b27-57e07317055a\" (UID: \"8c1f2ad2-9382-4863-9b27-57e07317055a\") "
	Dec 17 19:25:45 addons-695107 kubelet[1289]: I1217 19:25:45.182686    1289 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c1f2ad2-9382-4863-9b27-57e07317055a-kube-api-access-jbd2v" (OuterVolumeSpecName: "kube-api-access-jbd2v") pod "8c1f2ad2-9382-4863-9b27-57e07317055a" (UID: "8c1f2ad2-9382-4863-9b27-57e07317055a"). InnerVolumeSpecName "kube-api-access-jbd2v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 17 19:25:45 addons-695107 kubelet[1289]: I1217 19:25:45.236814    1289 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jbd2v\" (UniqueName: \"kubernetes.io/projected/8c1f2ad2-9382-4863-9b27-57e07317055a-kube-api-access-jbd2v\") on node \"addons-695107\" DevicePath \"\""
	Dec 17 19:25:45 addons-695107 kubelet[1289]: I1217 19:25:45.397529    1289 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c7190d7b4baedca435438b6d92bdf49a63ac89b44017deb0489e52ea07d865"
	Dec 17 19:25:46 addons-695107 kubelet[1289]: I1217 19:25:46.428031    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-7dc2q" podStartSLOduration=19.699845656 podStartE2EDuration="39.428006186s" podCreationTimestamp="2025-12-17 19:25:07 +0000 UTC" firstStartedPulling="2025-12-17 19:25:25.640834104 +0000 UTC m=+25.566372701" lastFinishedPulling="2025-12-17 19:25:45.368994627 +0000 UTC m=+45.294533231" observedRunningTime="2025-12-17 19:25:46.426673512 +0000 UTC m=+46.352212116" watchObservedRunningTime="2025-12-17 19:25:46.428006186 +0000 UTC m=+46.353544790"
	Dec 17 19:25:49 addons-695107 kubelet[1289]: I1217 19:25:49.429889    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-85d4c799dd-8mcfr" podStartSLOduration=28.293290485 podStartE2EDuration="42.429866965s" podCreationTimestamp="2025-12-17 19:25:07 +0000 UTC" firstStartedPulling="2025-12-17 19:25:34.736366191 +0000 UTC m=+34.661904777" lastFinishedPulling="2025-12-17 19:25:48.872942672 +0000 UTC m=+48.798481257" observedRunningTime="2025-12-17 19:25:49.428904276 +0000 UTC m=+49.354442878" watchObservedRunningTime="2025-12-17 19:25:49.429866965 +0000 UTC m=+49.355405567"
	Dec 17 19:25:50 addons-695107 kubelet[1289]: E1217 19:25:50.685212    1289 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Dec 17 19:25:50 addons-695107 kubelet[1289]: E1217 19:25:50.685312    1289 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58c8feae-1fa3-4ac5-b69e-212b116a2c16-gcr-creds podName:58c8feae-1fa3-4ac5-b69e-212b116a2c16 nodeName:}" failed. No retries permitted until 2025-12-17 19:26:22.685292468 +0000 UTC m=+82.610831064 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/58c8feae-1fa3-4ac5-b69e-212b116a2c16-gcr-creds") pod "registry-creds-764b6fb674-lglwq" (UID: "58c8feae-1fa3-4ac5-b69e-212b116a2c16") : secret "registry-creds-gcr" not found
	Dec 17 19:25:51 addons-695107 kubelet[1289]: I1217 19:25:51.206288    1289 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Dec 17 19:25:51 addons-695107 kubelet[1289]: I1217 19:25:51.206336    1289 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Dec 17 19:25:53 addons-695107 kubelet[1289]: I1217 19:25:53.465528    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-47zbj" podStartSLOduration=37.798619326 podStartE2EDuration="39.465501821s" podCreationTimestamp="2025-12-17 19:25:14 +0000 UTC" firstStartedPulling="2025-12-17 19:25:51.04885097 +0000 UTC m=+50.974389551" lastFinishedPulling="2025-12-17 19:25:52.715733454 +0000 UTC m=+52.641272046" observedRunningTime="2025-12-17 19:25:53.464246441 +0000 UTC m=+53.389785046" watchObservedRunningTime="2025-12-17 19:25:53.465501821 +0000 UTC m=+53.391040424"
	Dec 17 19:25:55 addons-695107 kubelet[1289]: I1217 19:25:55.475248    1289 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-j4557" podStartSLOduration=1.877229898 podStartE2EDuration="37.475221338s" podCreationTimestamp="2025-12-17 19:25:18 +0000 UTC" firstStartedPulling="2025-12-17 19:25:19.213735765 +0000 UTC m=+19.139274361" lastFinishedPulling="2025-12-17 19:25:54.811727219 +0000 UTC m=+54.737265801" observedRunningTime="2025-12-17 19:25:55.474573036 +0000 UTC m=+55.400111639" watchObservedRunningTime="2025-12-17 19:25:55.475221338 +0000 UTC m=+55.400759941"
	Dec 17 19:25:58 addons-695107 kubelet[1289]: I1217 19:25:58.038282    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/0895821d-164d-43f0-b04c-41cd5a505dbf-gcp-creds\") pod \"busybox\" (UID: \"0895821d-164d-43f0-b04c-41cd5a505dbf\") " pod="default/busybox"
	Dec 17 19:25:58 addons-695107 kubelet[1289]: I1217 19:25:58.038383    1289 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw6tm\" (UniqueName: \"kubernetes.io/projected/0895821d-164d-43f0-b04c-41cd5a505dbf-kube-api-access-pw6tm\") pod \"busybox\" (UID: \"0895821d-164d-43f0-b04c-41cd5a505dbf\") " pod="default/busybox"
	Dec 17 19:26:06 addons-695107 kubelet[1289]: E1217 19:26:06.092946    1289 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50454->127.0.0.1:45651: write tcp 127.0.0.1:50454->127.0.0.1:45651: write: broken pipe
	
	
	==> storage-provisioner [e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960] <==
	W1217 19:25:43.406254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:45.409586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:45.415300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:47.419892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:47.482768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:49.486138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:49.490424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:51.494497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:51.498688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:53.502310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:53.507069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:55.509672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:55.513317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:57.516511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:57.521017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:59.523931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:25:59.527699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:01.530887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:01.536246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:03.538868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:03.543749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:05.547292       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:05.551801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:07.555655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:26:07.560131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-695107 -n addons-695107
helpers_test.go:270: (dbg) Run:  kubectl --context addons-695107 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: gcp-auth-certs-create-bkcmb gcp-auth-certs-patch-st9qp ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz registry-creds-764b6fb674-lglwq
helpers_test.go:283: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context addons-695107 describe pod gcp-auth-certs-create-bkcmb gcp-auth-certs-patch-st9qp ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz registry-creds-764b6fb674-lglwq
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context addons-695107 describe pod gcp-auth-certs-create-bkcmb gcp-auth-certs-patch-st9qp ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz registry-creds-764b6fb674-lglwq: exit status 1 (68.789167ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-bkcmb" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-st9qp" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-rz9rh" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-6bdmz" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-lglwq" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context addons-695107 describe pod gcp-auth-certs-create-bkcmb gcp-auth-certs-patch-st9qp ingress-nginx-admission-create-rz9rh ingress-nginx-admission-patch-6bdmz registry-creds-764b6fb674-lglwq: exit status 1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable headlamp --alsologtostderr -v=1: exit status 11 (259.463512ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:08.953709  386427 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:08.953983  386427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:08.953996  386427 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:08.954004  386427 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:08.954261  386427 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:08.954576  386427 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:08.954977  386427 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:08.954999  386427 addons.go:622] checking whether the cluster is paused
	I1217 19:26:08.955140  386427 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:08.955160  386427 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:08.955671  386427 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:08.974108  386427 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:08.974175  386427 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:08.992623  386427 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:09.094210  386427 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:09.094321  386427 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:09.124690  386427 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:09.124723  386427 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:09.124728  386427 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:09.124732  386427 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:09.124734  386427 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:09.124738  386427 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:09.124741  386427 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:09.124744  386427 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:09.124747  386427 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:09.124757  386427 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:09.124760  386427 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:09.124763  386427 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:09.124765  386427 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:09.124768  386427 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:09.124770  386427 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:09.124782  386427 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:09.124788  386427 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:09.124795  386427 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:09.124798  386427 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:09.124803  386427 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:09.124806  386427 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:09.124810  386427 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:09.124815  386427 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:09.124819  386427 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:09.124823  386427 cri.go:89] found id: ""
	I1217 19:26:09.124877  386427 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:09.139688  386427 out.go:203] 
	W1217 19:26:09.140833  386427 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:09.140859  386427 out.go:285] * 
	* 
	W1217 19:26:09.144776  386427 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:09.146299  386427 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.64s)
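
Note on the failure mode shared by the addon-disable steps in this report: minikube aborts `addons disable` when the cluster looks paused, and on this crio profile the paused check shells out to `sudo runc list -f json`, which exits with status 1 because /run/runc does not exist, so the command stops with MK_ADDON_DISABLE_PAUSED before touching the addon. A minimal sketch for reproducing that check by hand against this profile (the crictl and runc invocations are taken from the trace above; /run/runc is runc's default state directory and is an assumption here, since a custom crio runtime configuration may use a different root):

# list kube-system containers the same way the disable path does
minikube -p addons-695107 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
# the step that fails in the trace: runc cannot open its state directory
minikube -p addons-695107 ssh -- sudo runc list -f json
# confirm whether the default state directory exists at all on the node
minikube -p addons-695107 ssh -- ls -ld /run/runc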

                                                
                                    
TestAddons/parallel/CloudSpanner (5.3s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-kzhtq" [dce77872-ec28-4525-b6d1-d5d5cea394bf] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003769798s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (291.73109ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:19.498619  387842 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:19.498909  387842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:19.498919  387842 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:19.498924  387842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:19.499231  387842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:19.499554  387842 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:19.500032  387842 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:19.500053  387842 addons.go:622] checking whether the cluster is paused
	I1217 19:26:19.500209  387842 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:19.500228  387842 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:19.500758  387842 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:19.521515  387842 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:19.521563  387842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:19.539718  387842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:19.649803  387842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:19.649923  387842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:19.683236  387842 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:19.683259  387842 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:19.683263  387842 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:19.683267  387842 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:19.683276  387842 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:19.683280  387842 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:19.683282  387842 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:19.683285  387842 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:19.683287  387842 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:19.683300  387842 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:19.683303  387842 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:19.683305  387842 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:19.683308  387842 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:19.683311  387842 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:19.683314  387842 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:19.683318  387842 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:19.683321  387842 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:19.683324  387842 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:19.683326  387842 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:19.683329  387842 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:19.683331  387842 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:19.683334  387842 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:19.683336  387842 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:19.683339  387842 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:19.683341  387842 cri.go:89] found id: ""
	I1217 19:26:19.683379  387842 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:19.701826  387842 out.go:203] 
	W1217 19:26:19.703608  387842 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:19.703637  387842 out.go:285] * 
	* 
	W1217 19:26:19.709188  387842 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:19.711127  387842 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.30s)

                                                
                                    
TestAddons/parallel/LocalPath (9.13s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-695107 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-695107 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-695107 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [d28a1b82-e00a-41a5-95f7-48220ac9a24b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [d28a1b82-e00a-41a5-95f7-48220ac9a24b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [d28a1b82-e00a-41a5-95f7-48220ac9a24b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002808537s
addons_test.go:969: (dbg) Run:  kubectl --context addons-695107 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 ssh "cat /opt/local-path-provisioner/pvc-53e85d6e-9bfa-403c-aeb8-846b9e87923f_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-695107 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-695107 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (259.801732ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:27.510956  388773 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:27.511098  388773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:27.511109  388773 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:27.511115  388773 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:27.511346  388773 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:27.511662  388773 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:27.512018  388773 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:27.512038  388773 addons.go:622] checking whether the cluster is paused
	I1217 19:26:27.512151  388773 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:27.512168  388773 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:27.512634  388773 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:27.531156  388773 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:27.531239  388773 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:27.549660  388773 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:27.652042  388773 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:27.652141  388773 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:27.682380  388773 cri.go:89] found id: "e83d1036078086aaca80c341c18864a4fa25b95af7b2bca016c4f75ad06315fa"
	I1217 19:26:27.682406  388773 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:27.682410  388773 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:27.682414  388773 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:27.682417  388773 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:27.682420  388773 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:27.682423  388773 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:27.682425  388773 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:27.682428  388773 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:27.682433  388773 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:27.682436  388773 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:27.682439  388773 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:27.682441  388773 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:27.682444  388773 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:27.682449  388773 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:27.682454  388773 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:27.682457  388773 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:27.682461  388773 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:27.682463  388773 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:27.682466  388773 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:27.682472  388773 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:27.682474  388773 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:27.682477  388773 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:27.682479  388773 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:27.682482  388773 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:27.682485  388773 cri.go:89] found id: ""
	I1217 19:26:27.682523  388773 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:27.699017  388773 out.go:203] 
	W1217 19:26:27.700636  388773 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:27.700658  388773 out.go:285] * 
	* 
	W1217 19:26:27.705326  388773 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:27.707232  388773 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.13s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.28s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-5hdv7" [2bc6b0b1-2270-4abe-b5d5-2dc24f542121] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004042367s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (269.436284ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:11.579840  386522 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:11.579944  386522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:11.579951  386522 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:11.579955  386522 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:11.580251  386522 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:11.580552  386522 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:11.580982  386522 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:11.581000  386522 addons.go:622] checking whether the cluster is paused
	I1217 19:26:11.581098  386522 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:11.581111  386522 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:11.581478  386522 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:11.602857  386522 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:11.602936  386522 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:11.623599  386522 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:11.727979  386522 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:11.728098  386522 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:11.758593  386522 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:11.758625  386522 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:11.758629  386522 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:11.758632  386522 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:11.758636  386522 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:11.758640  386522 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:11.758643  386522 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:11.758645  386522 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:11.758648  386522 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:11.758657  386522 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:11.758661  386522 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:11.758663  386522 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:11.758666  386522 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:11.758669  386522 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:11.758672  386522 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:11.758683  386522 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:11.758691  386522 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:11.758695  386522 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:11.758698  386522 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:11.758700  386522 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:11.758706  386522 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:11.758708  386522 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:11.758711  386522 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:11.758713  386522 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:11.758716  386522 cri.go:89] found id: ""
	I1217 19:26:11.758765  386522 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:11.773682  386522 out.go:203] 
	W1217 19:26:11.774932  386522 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:11.774950  386522 out.go:285] * 
	* 
	W1217 19:26:11.778826  386522 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:11.780141  386522 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.28s)
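
The addon-disable failures above all share the same stderr signature: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" over SSH (the ssh_runner.go lines), and on this crio node that command exits 1 with "open /run/runc: no such file or directory". The minimal Go sketch below is hypothetical (it is not minikube's code); it simply reproduces that probe on a host, assuming runc and sudo are available, so the failure mode can be exercised in isolation:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Reproduces the probe shown in the log: sudo runc list -f json.
	// Drop "sudo" if already running as root.
	cmd := exec.Command("sudo", "runc", "list", "-f", "json")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr

	err := cmd.Run()
	fmt.Printf("stdout:\n%s\n", stdout.String())
	fmt.Printf("stderr:\n%s\n", stderr.String())
	if err != nil {
		// On the nodes in this report this reports a non-zero exit status
		// because /run/runc, runc's default state directory, does not exist.
		fmt.Println("probe failed:", err)
	}
}

On a node whose container runtime does not maintain the default runc state directory /run/runc, the probe would be expected to print the same error seen throughout this report.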

                                                
                                    
TestAddons/parallel/Yakd (5.26s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-mcjdv" [5fc128c0-0181-498d-b723-38d9672efe86] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003716767s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable yakd --alsologtostderr -v=1: exit status 11 (258.970949ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:25.224691  388517 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:25.224985  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:25.224996  388517 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:25.225000  388517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:25.225298  388517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:25.225605  388517 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:25.225977  388517 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:25.225994  388517 addons.go:622] checking whether the cluster is paused
	I1217 19:26:25.226104  388517 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:25.226122  388517 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:25.226565  388517 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:25.245172  388517 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:25.245230  388517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:25.263782  388517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:25.365881  388517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:25.365988  388517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:25.395935  388517 cri.go:89] found id: "e83d1036078086aaca80c341c18864a4fa25b95af7b2bca016c4f75ad06315fa"
	I1217 19:26:25.395969  388517 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:25.395979  388517 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:25.395982  388517 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:25.395985  388517 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:25.395989  388517 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:25.395991  388517 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:25.395994  388517 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:25.395997  388517 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:25.396007  388517 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:25.396010  388517 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:25.396012  388517 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:25.396015  388517 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:25.396018  388517 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:25.396021  388517 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:25.396026  388517 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:25.396029  388517 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:25.396033  388517 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:25.396035  388517 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:25.396038  388517 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:25.396041  388517 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:25.396043  388517 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:25.396046  388517 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:25.396049  388517 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:25.396052  388517 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:25.396055  388517 cri.go:89] found id: ""
	I1217 19:26:25.396144  388517 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:25.410857  388517 out.go:203] 
	W1217 19:26:25.412123  388517 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:25.412156  388517 out.go:285] * 
	* 
	W1217 19:26:25.416227  388517 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:25.417782  388517 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (5.26s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (6.27s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-xl62h" [e36b51fd-d2b7-4d84-92fd-3f234d68f8f8] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003965334s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-695107 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-695107 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (264.182156ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:26:18.378386  387710 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:26:18.378513  387710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:18.378521  387710 out.go:374] Setting ErrFile to fd 2...
	I1217 19:26:18.378526  387710 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:26:18.378730  387710 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:26:18.379100  387710 mustload.go:66] Loading cluster: addons-695107
	I1217 19:26:18.379489  387710 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:18.379508  387710 addons.go:622] checking whether the cluster is paused
	I1217 19:26:18.379607  387710 config.go:182] Loaded profile config "addons-695107": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:26:18.379622  387710 host.go:66] Checking if "addons-695107" exists ...
	I1217 19:26:18.380129  387710 cli_runner.go:164] Run: docker container inspect addons-695107 --format={{.State.Status}}
	I1217 19:26:18.399452  387710 ssh_runner.go:195] Run: systemctl --version
	I1217 19:26:18.399534  387710 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-695107
	I1217 19:26:18.420369  387710 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/addons-695107/id_rsa Username:docker}
	I1217 19:26:18.523913  387710 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:26:18.523988  387710 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:26:18.554781  387710 cri.go:89] found id: "05e7c087fc88a388e9fce4a8fadcd7c6e045c449280b951b0a69fe971518c8e4"
	I1217 19:26:18.554813  387710 cri.go:89] found id: "030ee45fef3825f728fb878da790fd63c6e2d436f0bdee766e3b5c4313ba91b4"
	I1217 19:26:18.554820  387710 cri.go:89] found id: "e582a6b346e424adf2f6c23b450133f4ec35319edb9a095ef63a9da14924bc85"
	I1217 19:26:18.554825  387710 cri.go:89] found id: "6f1389fbed5a8165c3a7308b7768fbefbb05788ef8d898f075f95f6d5c909bde"
	I1217 19:26:18.554830  387710 cri.go:89] found id: "bb406a59b4704de349007327f30e38ffa01008f88e9504149a856dd758cb8314"
	I1217 19:26:18.554835  387710 cri.go:89] found id: "7927a0e1520a196318cf74495ff2fbd014eaec7890e7757b0c005f92944ba5fe"
	I1217 19:26:18.554839  387710 cri.go:89] found id: "4fd8c32f1f75b8dd6f3a5d4c557a48c965bfed2ee319e9ebc07b83a0498e9614"
	I1217 19:26:18.554844  387710 cri.go:89] found id: "3e0c0283ddfb5e25a2829243a99334aba7fddd2a8ed203b36520a310978711ad"
	I1217 19:26:18.554849  387710 cri.go:89] found id: "1309939d3b4dae1d9b8580e1652131608a79d12222165783d82fd3c6844da7d0"
	I1217 19:26:18.554858  387710 cri.go:89] found id: "8f0c2abe1917b2ff3fe742905d3cbd5e0734c50d00b37c3ae2d6bce65a81b1a4"
	I1217 19:26:18.554866  387710 cri.go:89] found id: "801db4b070e91430b722ceab6c3f6ad31c2b3fba0e4ec61f6575746703230db4"
	I1217 19:26:18.554870  387710 cri.go:89] found id: "c7eea19f4d49e38bd7e7f4cb234216d510d8104890af99fc48c47b7bea1c0bdd"
	I1217 19:26:18.554873  387710 cri.go:89] found id: "51a71566b557a3bb8ac4ee375ce62b941752fa12df3a062db96dfcdd7cf90c18"
	I1217 19:26:18.554876  387710 cri.go:89] found id: "a485e9f994ff95a2a7f3857ba3bac5871f37c7f68fe9a7511385fee343147b8b"
	I1217 19:26:18.554879  387710 cri.go:89] found id: "04f733eceac2431078e28d9b6aa0a99e8ae15495d70be998c595825b5d1bf4f8"
	I1217 19:26:18.554887  387710 cri.go:89] found id: "c3f541802ca322bdfefe59f58465e0b5fc47df46f565bbf169fdf155b6520813"
	I1217 19:26:18.554893  387710 cri.go:89] found id: "e3aca076801c71c61c7d166207a81c454eca7b4579247b6da815893233243960"
	I1217 19:26:18.554898  387710 cri.go:89] found id: "f32dab99d943eec56bf9918ed2f6b53e96fd877cfbbf5192cf7d857f1b776f8e"
	I1217 19:26:18.554901  387710 cri.go:89] found id: "b68b1b351d2b0d7d4628fdbe0a6689c4e3150e140e9149ec00e8886c21c85388"
	I1217 19:26:18.554904  387710 cri.go:89] found id: "bc8813162646db6787344c15bb78bf1f1a23063d72326a728b0a42dafc7c4d56"
	I1217 19:26:18.554908  387710 cri.go:89] found id: "bea3125cf2914bd997ad7c9b382bc666af7c3ef97d39311b120cecf6bfd19b22"
	I1217 19:26:18.554911  387710 cri.go:89] found id: "5875440c2f308ff9ae46bdeb21b7960b61f51fff5f745adf6f9deb63f35cfb16"
	I1217 19:26:18.554914  387710 cri.go:89] found id: "87468d7032ea669744a3be9490a79472140a58976b8a3c756b65a43dbda2d50e"
	I1217 19:26:18.554929  387710 cri.go:89] found id: "fd7cf6d64d69e77f0f93c54b2f5c32210f59f02ec07dbd9708e6d7d40d2b4e33"
	I1217 19:26:18.554934  387710 cri.go:89] found id: ""
	I1217 19:26:18.554976  387710 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:26:18.570580  387710 out.go:203] 
	W1217 19:26:18.572329  387710 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:26:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:26:18.572361  387710 out.go:285] * 
	* 
	W1217 19:26:18.576888  387710 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:26:18.578467  387710 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1057: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-695107 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image load --daemon kicbase/echo-server:functional-676725 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 image load --daemon kicbase/echo-server:functional-676725 --alsologtostderr: (1.41961754s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 image ls: (2.311898932s)
functional_test.go:461: expected "kicbase/echo-server:functional-676725" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.73s)

                                                
                                    
TestJSONOutput/pause/Command (2.32s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-958146 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-958146 --output=json --user=testUser: exit status 80 (2.32031729s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6bc86050-d4e6-4fa8-a4bc-a982af441b51","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-958146 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"ef805821-1223-47b1-8a72-0a5adfb82111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T19:43:47Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"eb625cec-193c-4786-a8a1-bad76b4f710b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-958146 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.32s)
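
The stdout captured above is newline-delimited JSON, one CloudEvents-style object per line (a step event followed by error events). The short Go sketch below decodes one such line; the struct and variable names are invented for illustration, and the sample line is abridged from the output above:

package main

import (
	"encoding/json"
	"fmt"
)

// minikubeEvent mirrors the fields visible in the JSON lines above;
// the type name is made up for this sketch.
type minikubeEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	// Abridged copy of the error event from the output above.
	line := `{"specversion":"1.0","id":"ef805821-1223-47b1-8a72-0a5adfb82111","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"exitcode":"80","name":"GUEST_PAUSE","message":"Pause: list running: runc: ..."}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("%s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
	}
}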

                                                
                                    
TestJSONOutput/unpause/Command (1.8s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-958146 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-958146 --output=json --user=testUser: exit status 80 (1.800497445s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0bc8514b-efd7-4654-8fa1-875452addc12","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-958146 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"35c24713-76ff-43f5-81b6-88d7376a8bed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-12-17T19:43:49Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"57c02ce1-0f70-4077-a3f5-9e2693c3ab38","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-958146 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.80s)

                                                
                                    
TestPause/serial/Pause (6.43s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-318455 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-318455 --alsologtostderr -v=5: exit status 80 (2.548658082s)

                                                
                                                
-- stdout --
	* Pausing node pause-318455 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:57:15.569770  581779 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:57:15.571695  581779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:57:15.571713  581779 out.go:374] Setting ErrFile to fd 2...
	I1217 19:57:15.571721  581779 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:57:15.572219  581779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:57:15.573893  581779 out.go:368] Setting JSON to false
	I1217 19:57:15.573971  581779 mustload.go:66] Loading cluster: pause-318455
	I1217 19:57:15.575062  581779 config.go:182] Loaded profile config "pause-318455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:15.575996  581779 cli_runner.go:164] Run: docker container inspect pause-318455 --format={{.State.Status}}
	I1217 19:57:15.600148  581779 host.go:66] Checking if "pause-318455" exists ...
	I1217 19:57:15.600562  581779 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:57:15.680604  581779 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-17 19:57:15.668786807 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:57:15.681611  581779 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-318455 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 19:57:15.686065  581779 out.go:179] * Pausing node pause-318455 ... 
	I1217 19:57:15.687680  581779 host.go:66] Checking if "pause-318455" exists ...
	I1217 19:57:15.688098  581779 ssh_runner.go:195] Run: systemctl --version
	I1217 19:57:15.688152  581779 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-318455
	I1217 19:57:15.713425  581779 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/pause-318455/id_rsa Username:docker}
	I1217 19:57:15.824272  581779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:57:15.838685  581779 pause.go:52] kubelet running: true
	I1217 19:57:15.838755  581779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 19:57:15.985364  581779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 19:57:15.985477  581779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 19:57:16.065174  581779 cri.go:89] found id: "8a890307848d3863ac5dda4d27388c617ecb303c809f2a7fc9317b22fb60fda7"
	I1217 19:57:16.065208  581779 cri.go:89] found id: "dece30b73bfce1ce557b5bbe5dbaf9154f600e34ba66b7ca4ca88e585241097c"
	I1217 19:57:16.065215  581779 cri.go:89] found id: "19248a249a354c5c3da43d5ddc3ff65f75c61b9f2cab9913aab8d6492000822f"
	I1217 19:57:16.065221  581779 cri.go:89] found id: "ede91caa7f2fcc03537da65481e4d60d4a910e278cfbc996cd09ccdce85e42af"
	I1217 19:57:16.065226  581779 cri.go:89] found id: "2cee54e9215fa59351da49c19c47358d8bfa5c9c824fa627c1b9f685d24495b7"
	I1217 19:57:16.065231  581779 cri.go:89] found id: "12f0a8e54bc78853c3f054005a5648e352dda07cdc1713c286582320329e7057"
	I1217 19:57:16.065247  581779 cri.go:89] found id: "76b691e5433f67fe8b6ba2acd73106fa663b879e6b9059c7bba6777dd6049659"
	I1217 19:57:16.065252  581779 cri.go:89] found id: ""
	I1217 19:57:16.065308  581779 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:57:16.079760  581779 retry.go:31] will retry after 277.809234ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:57:16Z" level=error msg="open /run/runc: no such file or directory"
	I1217 19:57:16.358323  581779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:57:16.374052  581779 pause.go:52] kubelet running: false
	I1217 19:57:16.374147  581779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 19:57:16.523805  581779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 19:57:16.523929  581779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 19:57:16.598618  581779 cri.go:89] found id: "8a890307848d3863ac5dda4d27388c617ecb303c809f2a7fc9317b22fb60fda7"
	I1217 19:57:16.598648  581779 cri.go:89] found id: "dece30b73bfce1ce557b5bbe5dbaf9154f600e34ba66b7ca4ca88e585241097c"
	I1217 19:57:16.598654  581779 cri.go:89] found id: "19248a249a354c5c3da43d5ddc3ff65f75c61b9f2cab9913aab8d6492000822f"
	I1217 19:57:16.598659  581779 cri.go:89] found id: "ede91caa7f2fcc03537da65481e4d60d4a910e278cfbc996cd09ccdce85e42af"
	I1217 19:57:16.598664  581779 cri.go:89] found id: "2cee54e9215fa59351da49c19c47358d8bfa5c9c824fa627c1b9f685d24495b7"
	I1217 19:57:16.598668  581779 cri.go:89] found id: "12f0a8e54bc78853c3f054005a5648e352dda07cdc1713c286582320329e7057"
	I1217 19:57:16.598672  581779 cri.go:89] found id: "76b691e5433f67fe8b6ba2acd73106fa663b879e6b9059c7bba6777dd6049659"
	I1217 19:57:16.598675  581779 cri.go:89] found id: ""
	I1217 19:57:16.598728  581779 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:57:16.612116  581779 retry.go:31] will retry after 371.193132ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:57:16Z" level=error msg="open /run/runc: no such file or directory"
	I1217 19:57:16.983741  581779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:57:16.996926  581779 pause.go:52] kubelet running: false
	I1217 19:57:16.996996  581779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 19:57:17.114393  581779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 19:57:17.114463  581779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 19:57:17.212326  581779 cri.go:89] found id: "8a890307848d3863ac5dda4d27388c617ecb303c809f2a7fc9317b22fb60fda7"
	I1217 19:57:17.212370  581779 cri.go:89] found id: "dece30b73bfce1ce557b5bbe5dbaf9154f600e34ba66b7ca4ca88e585241097c"
	I1217 19:57:17.212376  581779 cri.go:89] found id: "19248a249a354c5c3da43d5ddc3ff65f75c61b9f2cab9913aab8d6492000822f"
	I1217 19:57:17.212379  581779 cri.go:89] found id: "ede91caa7f2fcc03537da65481e4d60d4a910e278cfbc996cd09ccdce85e42af"
	I1217 19:57:17.212382  581779 cri.go:89] found id: "2cee54e9215fa59351da49c19c47358d8bfa5c9c824fa627c1b9f685d24495b7"
	I1217 19:57:17.212384  581779 cri.go:89] found id: "12f0a8e54bc78853c3f054005a5648e352dda07cdc1713c286582320329e7057"
	I1217 19:57:17.212387  581779 cri.go:89] found id: "76b691e5433f67fe8b6ba2acd73106fa663b879e6b9059c7bba6777dd6049659"
	I1217 19:57:17.212390  581779 cri.go:89] found id: ""
	I1217 19:57:17.212445  581779 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:57:17.226016  581779 retry.go:31] will retry after 379.840976ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:57:17Z" level=error msg="open /run/runc: no such file or directory"
	I1217 19:57:17.606362  581779 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:57:17.620499  581779 pause.go:52] kubelet running: false
	I1217 19:57:17.620568  581779 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 19:57:17.735467  581779 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 19:57:17.735574  581779 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 19:57:17.806959  581779 cri.go:89] found id: "8a890307848d3863ac5dda4d27388c617ecb303c809f2a7fc9317b22fb60fda7"
	I1217 19:57:17.806985  581779 cri.go:89] found id: "dece30b73bfce1ce557b5bbe5dbaf9154f600e34ba66b7ca4ca88e585241097c"
	I1217 19:57:17.806991  581779 cri.go:89] found id: "19248a249a354c5c3da43d5ddc3ff65f75c61b9f2cab9913aab8d6492000822f"
	I1217 19:57:17.806994  581779 cri.go:89] found id: "ede91caa7f2fcc03537da65481e4d60d4a910e278cfbc996cd09ccdce85e42af"
	I1217 19:57:17.806998  581779 cri.go:89] found id: "2cee54e9215fa59351da49c19c47358d8bfa5c9c824fa627c1b9f685d24495b7"
	I1217 19:57:17.807002  581779 cri.go:89] found id: "12f0a8e54bc78853c3f054005a5648e352dda07cdc1713c286582320329e7057"
	I1217 19:57:17.807006  581779 cri.go:89] found id: "76b691e5433f67fe8b6ba2acd73106fa663b879e6b9059c7bba6777dd6049659"
	I1217 19:57:17.807011  581779 cri.go:89] found id: ""
	I1217 19:57:17.807064  581779 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:57:18.004984  581779 out.go:203] 
	W1217 19:57:18.008668  581779 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:57:17Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:57:17Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 19:57:18.008699  581779 out.go:285] * 
	* 
	W1217 19:57:18.016026  581779 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 19:57:18.018398  581779 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-318455 --alsologtostderr -v=5" : exit status 80
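
The stderr above also shows the retry behaviour leading up to this failure: the runc probe is re-run after short waits (retry.go reports 277 ms, 371 ms and 379 ms) before the command finally exits with GUEST_PAUSE. The sketch below is a generic illustration of that retry-then-give-up flow; the function, delays and attempt count are placeholders, not minikube's implementation:

package main

import (
	"errors"
	"fmt"
	"time"
)

// probe stands in for the failing "sudo runc list -f json" call from the log.
func probe() error {
	return errors.New("open /run/runc: no such file or directory")
}

func main() {
	// Placeholder delays, loosely modelled on the retry.go waits in the log.
	delays := []time.Duration{280 * time.Millisecond, 370 * time.Millisecond, 380 * time.Millisecond}
	var err error
	for attempt := 0; ; attempt++ {
		if err = probe(); err == nil {
			fmt.Println("probe succeeded")
			return
		}
		if attempt >= len(delays) {
			break // retries exhausted
		}
		fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
		time.Sleep(delays[attempt])
	}
	// In the report this is where minikube exits with GUEST_PAUSE (status 80).
	fmt.Println("giving up:", err)
}
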
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-318455
helpers_test.go:244: (dbg) docker inspect pause-318455:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38",
	        "Created": "2025-12-17T19:56:22.289919361Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 562285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T19:56:22.883512955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/hostname",
	        "HostsPath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/hosts",
	        "LogPath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38-json.log",
	        "Name": "/pause-318455",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-318455:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-318455",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38",
	                "LowerDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-318455",
	                "Source": "/var/lib/docker/volumes/pause-318455/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-318455",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-318455",
	                "name.minikube.sigs.k8s.io": "pause-318455",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5fb39d8d518d49343fdf2bee5f83efd7757446d2cb2a8f386a55388e3b212d7b",
	            "SandboxKey": "/var/run/docker/netns/5fb39d8d518d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-318455": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8206734db8de825ac49e3419599fccb4210ea5530cc02084df1f155f4c026ac7",
	                    "EndpointID": "39ea0553221cbc9a18696672fcfa59a451e7db573594b278557d672e02c603b8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d6:0d:88:f7:63:02",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-318455",
	                        "304c1eba0f9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-318455 -n pause-318455
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-318455 -n pause-318455: exit status 2 (379.257676ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-318455 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-318455 logs -n 25: (1.094518805s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --cancel-scheduled                                                                                 │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │ 17 Dec 25 19:54 UTC │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │ 17 Dec 25 19:55 UTC │
	│ delete  │ -p scheduled-stop-197684                                                                                                    │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │ 17 Dec 25 19:55 UTC │
	│ start   │ -p insufficient-storage-455834 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-455834 │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │                     │
	│ delete  │ -p insufficient-storage-455834                                                                                              │ insufficient-storage-455834 │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p force-systemd-env-335995 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-335995    │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p pause-318455 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-318455                │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p offline-crio-299824 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-299824         │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p stopped-upgrade-321305 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-321305      │ jenkins │ v1.35.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ delete  │ -p force-systemd-env-335995                                                                                                 │ force-systemd-env-335995    │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p running-upgrade-827750 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ running-upgrade-827750      │ jenkins │ v1.35.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ stop    │ stopped-upgrade-321305 stop                                                                                                 │ stopped-upgrade-321305      │ jenkins │ v1.35.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p stopped-upgrade-321305 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-321305      │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ delete  │ -p offline-crio-299824                                                                                                      │ offline-crio-299824         │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-059470      │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	│ start   │ -p pause-318455 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-318455                │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p running-upgrade-827750 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-827750      │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	│ delete  │ -p stopped-upgrade-321305                                                                                                   │ stopped-upgrade-321305      │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p force-systemd-flag-134068 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-134068   │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	│ pause   │ -p pause-318455 --alsologtostderr -v=5                                                                                      │ pause-318455                │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:57:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:57:14.036479  580641 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:57:14.036799  580641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:57:14.036811  580641 out.go:374] Setting ErrFile to fd 2...
	I1217 19:57:14.036815  580641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:57:14.037068  580641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:57:14.037557  580641 out.go:368] Setting JSON to false
	I1217 19:57:14.038736  580641 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5985,"bootTime":1765995449,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:57:14.038794  580641 start.go:143] virtualization: kvm guest
	I1217 19:57:14.040998  580641 out.go:179] * [force-systemd-flag-134068] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:57:14.042538  580641 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:57:14.042622  580641 notify.go:221] Checking for updates...
	I1217 19:57:14.045228  580641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:57:14.046754  580641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:57:14.048123  580641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:57:14.049411  580641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:57:14.050996  580641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:57:14.053254  580641 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:14.053471  580641 config.go:182] Loaded profile config "pause-318455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:14.053590  580641 config.go:182] Loaded profile config "running-upgrade-827750": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 19:57:14.053720  580641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:57:14.081358  580641 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:57:14.081478  580641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:57:14.145067  580641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 19:57:14.133215392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:57:14.145248  580641 docker.go:319] overlay module found
	I1217 19:57:14.150539  580641 out.go:179] * Using the docker driver based on user configuration
	I1217 19:57:14.151806  580641 start.go:309] selected driver: docker
	I1217 19:57:14.151827  580641 start.go:927] validating driver "docker" against <nil>
	I1217 19:57:14.151848  580641 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:57:14.152514  580641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:57:14.219248  580641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 19:57:14.209493418 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:57:14.219448  580641 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:57:14.219698  580641 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:57:14.221600  580641 out.go:179] * Using Docker driver with root privileges
	I1217 19:57:14.223164  580641 cni.go:84] Creating CNI manager for ""
	I1217 19:57:14.223239  580641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:57:14.223254  580641 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 19:57:14.223375  580641 start.go:353] cluster config:
	{Name:force-systemd-flag-134068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-134068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:57:14.224745  580641 out.go:179] * Starting "force-systemd-flag-134068" primary control-plane node in "force-systemd-flag-134068" cluster
	I1217 19:57:14.226016  580641 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 19:57:14.227425  580641 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 19:57:14.228707  580641 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:57:14.228752  580641 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 19:57:14.228763  580641 cache.go:65] Caching tarball of preloaded images
	I1217 19:57:14.228846  580641 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 19:57:14.228913  580641 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 19:57:14.228929  580641 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 19:57:14.229055  580641 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/force-systemd-flag-134068/config.json ...
	I1217 19:57:14.229118  580641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/force-systemd-flag-134068/config.json: {Name:mkd1292bf2c40cbf6298cfdeb86e55351afaaef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:14.253559  580641 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 19:57:14.253583  580641 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 19:57:14.253604  580641 cache.go:243] Successfully downloaded all kic artifacts
	I1217 19:57:14.253642  580641 start.go:360] acquireMachinesLock for force-systemd-flag-134068: {Name:mk85e60e74f50e12c3ce481cea309b0a4fa323d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:57:14.253765  580641 start.go:364] duration metric: took 100.493µs to acquireMachinesLock for "force-systemd-flag-134068"
	I1217 19:57:14.253830  580641 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-134068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-134068 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:57:14.253936  580641 start.go:125] createHost starting for "" (driver="docker")
	I1217 19:57:13.451883  578056 cli_runner.go:164] Run: docker network inspect running-upgrade-827750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:57:13.473013  578056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 19:57:13.478151  578056 kubeadm.go:884] updating cluster {Name:running-upgrade-827750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-827750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:57:13.478289  578056 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1217 19:57:13.478357  578056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:57:13.530315  578056 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:57:13.530342  578056 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:57:13.530398  578056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:57:13.572774  578056 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:57:13.572801  578056 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:57:13.572809  578056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1217 19:57:13.572946  578056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=running-upgrade-827750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-827750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:57:13.573030  578056 ssh_runner.go:195] Run: crio config
	I1217 19:57:13.633742  578056 cni.go:84] Creating CNI manager for ""
	I1217 19:57:13.633768  578056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:57:13.633789  578056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:57:13.633821  578056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-827750 NodeName:running-upgrade-827750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:57:13.634027  578056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-827750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:57:13.634122  578056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1217 19:57:13.644377  578056 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:57:13.644452  578056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:57:13.654572  578056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 19:57:13.677302  578056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:57:13.700305  578056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 19:57:13.723176  578056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:57:13.727805  578056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:13.841125  578056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:57:13.855195  578056 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750 for IP: 192.168.76.2
	I1217 19:57:13.855222  578056 certs.go:195] generating shared ca certs ...
	I1217 19:57:13.855242  578056 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:13.855424  578056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:57:13.855500  578056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:57:13.855519  578056 certs.go:257] generating profile certs ...
	I1217 19:57:13.855629  578056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/client.key
	I1217 19:57:13.855689  578056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/apiserver.key.f9167027
	I1217 19:57:13.855742  578056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/proxy-client.key
	I1217 19:57:13.855911  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:57:13.855953  578056 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:57:13.855968  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:57:13.855995  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:57:13.856020  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:57:13.856046  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:57:13.856121  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:57:13.857148  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:57:13.887842  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:57:13.919670  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:57:13.948683  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:57:13.977444  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 19:57:14.006333  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:57:14.035253  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:57:14.062472  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:57:14.092023  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:57:14.128338  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:57:14.158701  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:57:14.192990  578056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:57:14.215335  578056 ssh_runner.go:195] Run: openssl version
	I1217 19:57:14.221783  578056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.231695  578056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:57:14.241465  578056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.246614  578056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.246676  578056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.255254  578056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:57:14.266427  578056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.276231  578056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:57:14.287003  578056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.291982  578056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.292058  578056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.300601  578056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:57:14.311459  578056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.322456  578056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:57:14.332980  578056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.336972  578056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.337029  578056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.345526  578056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:57:14.358223  578056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:57:14.363001  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 19:57:14.370802  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 19:57:14.379650  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 19:57:14.388002  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 19:57:14.396034  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 19:57:14.404970  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 19:57:14.412958  578056 kubeadm.go:401] StartCluster: {Name:running-upgrade-827750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-827750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:57:14.413189  578056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:57:14.413251  578056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:57:14.461217  578056 cri.go:89] found id: "c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30"
	I1217 19:57:14.461242  578056 cri.go:89] found id: "eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2"
	I1217 19:57:14.461250  578056 cri.go:89] found id: "67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9"
	I1217 19:57:14.461254  578056 cri.go:89] found id: "5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8"
	I1217 19:57:14.461259  578056 cri.go:89] found id: "ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a"
	I1217 19:57:14.461264  578056 cri.go:89] found id: ""
	I1217 19:57:14.461310  578056 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:57:14.487065  578056 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8","pid":1407,"status":"running","bundle":"/run/containers/storage/overlay-containers/5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8/userdata","rootfs":"/var/lib/containers/storage/overlay/9b3c150ca7bcba5bf34f331a576916ee09f7697cc3902c309e4d8014670025f3/merged","created":"2025-12-17T19:57:01.521248761Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99f3a73e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99f3a73e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:01.441006787Z","io.kubernetes.cri-o.Image":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.32.0","io.kubernetes.cri-o.ImageRef":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"be85137664fa759b16844d75be32d27d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-827750_be85137664fa759b16844d75be32d27d/kube-controller-manager/0
.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9b3c150ca7bcba5bf34f331a576916ee09f7697cc3902c309e4d8014670025f3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-827750_kube-system_be85137664fa759b16844d75be32d27d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/04155491f8ce28a7a0f843c979a2bc8734a6a1ca33a9b4cbf939cde42626a0de/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"04155491f8ce28a7a0f843c979a2bc8734a6a1ca33a9b4cbf939cde42626a0de","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-827750_kube-system_be85137664fa759b16844d75be32d27d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-
certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/be85137664fa759b16844d75be32d27d/containers/kube-controller-manager/ea608ad0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/be85137664fa759b16844d75be32d27d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path
\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"be85137664fa759b16844d75be32d27d","kubernetes.io/config.hash":"be85137664fa759b16844d75be32d27d","kubernetes.io/config.seen":"2025-12-17T19:57:00.936598074Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.
systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9","pid":1414,"status":"running","bundle":"/run/containers/storage/overlay-containers/67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9/userdata","rootfs":"/var/lib/containers/storage/overlay/56b437f0a21804dfd160c6f79b644b403fa410251e1dc64ca10c54f8c3eba1a0/merged","created":"2025-12-17T19:57:01.51538618Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8c4b12d6","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8c4b12d6\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:01.455514784Z","io.kubernetes.cri-o.Image":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.32.0","io.kubernetes.cri-o.ImageRef":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4527a95fbd358783b084b49b25a105e8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-827750_4527a95fbd358783b084b49b25a105e8/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name
\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/56b437f0a21804dfd160c6f79b644b403fa410251e1dc64ca10c54f8c3eba1a0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-827750_kube-system_4527a95fbd358783b084b49b25a105e8_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7dd109e08d775d70a4c4f1efae614567783812d56fe4f3a1c673bb2cda2e93b7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7dd109e08d775d70a4c4f1efae614567783812d56fe4f3a1c673bb2cda2e93b7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-827750_kube-system_4527a95fbd358783b084b49b25a105e8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4527a95fbd358783b084b49b25a105e8/etc-hosts\",\"readonly\":false,\"propagation
\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4527a95fbd358783b084b49b25a105e8/containers/kube-scheduler/7624139d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4527a95fbd358783b084b49b25a105e8","kubernetes.io/config.hash":"4527a95fbd358783b084b49b25a105e8","kubernetes.io/config.seen":"2025-12-17T19:57:00.936599561Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}
,{"ociVersion":"1.0.2-dev","id":"c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30","pid":1910,"status":"running","bundle":"/run/containers/storage/overlay-containers/c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30/userdata","rootfs":"/var/lib/containers/storage/overlay/028142cc41fbc0d0a6c59ce389a5bc6666d0756a0890e0f03a29a73f2ae979f1/merged","created":"2025-12-17T19:57:12.059624263Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.term
inationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:12.008374554Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cf29c230-6fc0-49cc-9bb0-b76255ee79b3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_cf29c230-6fc0-49cc-9bb0-b76255ee79b3/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/cont
ainers/storage/overlay/028142cc41fbc0d0a6c59ce389a5bc6666d0756a0890e0f03a29a73f2ae979f1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_cf29c230-6fc0-49cc-9bb0-b76255ee79b3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/30bcbbd7661a6c026db69a4c64da3f0835488ae1ae8497663e7619d9e66555b0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"30bcbbd7661a6c026db69a4c64da3f0835488ae1ae8497663e7619d9e66555b0","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_cf29c230-6fc0-49cc-9bb0-b76255ee79b3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cf29c230-6fc0-49cc-9bb0-b76255ee79b3/etc-hosts\",\"read
only\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cf29c230-6fc0-49cc-9bb0-b76255ee79b3/containers/storage-provisioner/71cb6206\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cf29c230-6fc0-49cc-9bb0-b76255ee79b3/volumes/kubernetes.io~projected/kube-api-access-vc2fd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cf29c230-6fc0-49cc-9bb0-b76255ee79b3","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"name
space\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-12-17T19:57:11.669095716Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2","pid":1423,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2/userdata
","rootfs":"/var/lib/containers/storage/overlay/c636a31b83bb618efd1d6c576f6a7a264ec9e7e9fd1eba071e32b64d8a8f0963/merged","created":"2025-12-17T19:57:01.523735873Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e68be80f","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e68be80f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:01.45831477Z","io.kubernetes.cri-o.Image":"
a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.16-0","io.kubernetes.cri-o.ImageRef":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c79baa9b7a3546eed685736683689cae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-827750_c79baa9b7a3546eed685736683689cae/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c636a31b83bb618efd1d6c576f6a7a264ec9e7e9fd1eba071e32b64d8a8f0963/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-827750_kube-system_c79baa9b7a3546eed685736683689cae_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f2e75ac7e7bb3e64d590e3827d990c543b8
6b5bf93057e43d8b8610ed4d80972/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f2e75ac7e7bb3e64d590e3827d990c543b86b5bf93057e43d8b8610ed4d80972","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-827750_kube-system_c79baa9b7a3546eed685736683689cae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c79baa9b7a3546eed685736683689cae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c79baa9b7a3546eed685736683689cae/containers/etcd/176c49cf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/
minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c79baa9b7a3546eed685736683689cae","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"c79baa9b7a3546eed685736683689cae","kubernetes.io/config.seen":"2025-12-17T19:57:00.936601021Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a","pid":1400,"status":"running","bundle":"/run/containers/storage/overlay-containers/ed0b2dba94e66a9279b817134e5b8ed5
59668ef50c6b6a01f82bca41f61dfa2a/userdata","rootfs":"/var/lib/containers/storage/overlay/9441aba44a344b14388881f7531a5dcb1eae6023ba21115c59344881096e0143/merged","created":"2025-12-17T19:57:01.510186854Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf915d6a","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf915d6a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-1
7T19:57:01.42715216Z","io.kubernetes.cri-o.Image":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.32.0","io.kubernetes.cri-o.ImageRef":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5751ebc3794da7e9b32cdcff3cdc6826\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-827750_5751ebc3794da7e9b32cdcff3cdc6826/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9441aba44a344b14388881f7531a5dcb1eae6023ba21115c59344881096e0143/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-827750_kube-system_5751ebc3794da7e9b3
2cdcff3cdc6826_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/152d9bf95d29f3a81eb7d6f2bad2a49cad2bbbc511b7bd30e8ee43bdf3cf5e7c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"152d9bf95d29f3a81eb7d6f2bad2a49cad2bbbc511b7bd30e8ee43bdf3cf5e7c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-827750_kube-system_5751ebc3794da7e9b32cdcff3cdc6826_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5751ebc3794da7e9b32cdcff3cdc6826/containers/kube-apiserver/36fd9baf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/
pods/5751ebc3794da7e9b32cdcff3cdc6826/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5751ebc3794da7e9b32cdcff3cdc6826","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168
.76.2:8443","kubernetes.io/config.hash":"5751ebc3794da7e9b32cdcff3cdc6826","kubernetes.io/config.seen":"2025-12-17T19:57:00.936593679Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I1217 19:57:14.487379  578056 cri.go:126] list returned 5 containers
	I1217 19:57:14.487395  578056 cri.go:129] container: {ID:5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 Status:running}
	I1217 19:57:14.487429  578056 cri.go:135] skipping {5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 running}: state = "running", want "paused"
	I1217 19:57:14.487439  578056 cri.go:129] container: {ID:67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 Status:running}
	I1217 19:57:14.487444  578056 cri.go:135] skipping {67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 running}: state = "running", want "paused"
	I1217 19:57:14.487451  578056 cri.go:129] container: {ID:c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 Status:running}
	I1217 19:57:14.487456  578056 cri.go:135] skipping {c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 running}: state = "running", want "paused"
	I1217 19:57:14.487463  578056 cri.go:129] container: {ID:eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 Status:running}
	I1217 19:57:14.487468  578056 cri.go:135] skipping {eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 running}: state = "running", want "paused"
	I1217 19:57:14.487476  578056 cri.go:129] container: {ID:ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a Status:running}
	I1217 19:57:14.487480  578056 cri.go:135] skipping {ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a running}: state = "running", want "paused"
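	The cri.go lines above show minikube parsing the `sudo runc list -f json` output and skipping every container whose state is "running" because it is looking for "paused" ones. A minimal Go sketch of that filtering step, assuming only the "id" and "status" fields visible in the JSON above (this is not minikube's actual implementation):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer holds the two fields of `runc list -f json` output that the
	// filtering step needs: the container ID and its current state.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			panic(err)
		}
		var containers []runcContainer
		if err := json.Unmarshal(out, &containers); err != nil {
			panic(err)
		}
		for _, c := range containers {
			if c.Status != "paused" {
				// Mirrors the "skipping {... running}: state = ..." lines above.
				fmt.Printf("skipping %s: state = %q, want \"paused\"\n", c.ID, c.Status)
				continue
			}
			fmt.Println("paused container:", c.ID)
		}
	}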
	I1217 19:57:14.487525  578056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:57:14.498243  578056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 19:57:14.498270  578056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 19:57:14.498334  578056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 19:57:14.508653  578056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:57:14.509450  578056 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-827750" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:57:14.509763  578056 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-372245/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-827750" cluster setting kubeconfig missing "running-upgrade-827750" context setting]
	I1217 19:57:14.510353  578056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:14.511219  578056 kapi.go:59] client config for running-upgrade-827750: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 19:57:14.511683  578056 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 19:57:14.511707  578056 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 19:57:14.511714  578056 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 19:57:14.511721  578056 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 19:57:14.511726  578056 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 19:57:14.512322  578056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 19:57:14.523052  578056 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 19:56:56.880555879 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 19:57:13.719334428 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
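	The drift check above reduces to running `diff -u` against the freshly rendered kubeadm.yaml and treating exit status 1 as "files differ, reconfigure". A minimal Go sketch of that decision, using the paths from this log (not minikube's actual code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// configDrift runs `diff -u current generated`. diff exits 0 when the files
	// match, 1 when they differ, and >1 on errors such as a missing file.
	func configDrift(current, generated string) (bool, string, error) {
		out, err := exec.Command("diff", "-u", current, generated).CombinedOutput()
		if err == nil {
			return false, "", nil // identical: keep the existing config
		}
		if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
			return true, string(out), nil // drift: reconfigure from the new file
		}
		return false, "", err // diff itself failed
	}

	func main() {
		drift, diff, err := configDrift(
			"/var/tmp/minikube/kubeadm.yaml",
			"/var/tmp/minikube/kubeadm.yaml.new",
		)
		if err != nil {
			panic(err)
		}
		if drift {
			fmt.Print("kubeadm config drift detected:\n" + diff)
		}
	}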
	I1217 19:57:14.523088  578056 kubeadm.go:1161] stopping kube-system containers ...
	I1217 19:57:14.523111  578056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 19:57:14.523174  578056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:57:14.563149  578056 cri.go:89] found id: "c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30"
	I1217 19:57:14.563175  578056 cri.go:89] found id: "eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2"
	I1217 19:57:14.563190  578056 cri.go:89] found id: "67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9"
	I1217 19:57:14.563195  578056 cri.go:89] found id: "5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8"
	I1217 19:57:14.563200  578056 cri.go:89] found id: "ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a"
	I1217 19:57:14.563205  578056 cri.go:89] found id: ""
	I1217 19:57:14.563213  578056 cri.go:252] Stopping containers: [c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a]
	I1217 19:57:14.563274  578056 ssh_runner.go:195] Run: which crictl
	I1217 19:57:14.567166  578056 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a
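	Stopping the kube-system containers comes down to the two crictl invocations shown above: list container IDs by the io.kubernetes.pod.namespace label, then stop them with --timeout=10. A minimal standalone Go sketch of the same sequence (not minikube's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List all kube-system container IDs, as in the crictl ps call above.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			panic(err)
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Println("no kube-system containers to stop")
			return
		}
		// Stop them with the same 10s grace period used in the log.
		args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
		if err := exec.Command("sudo", args...).Run(); err != nil {
			panic(err)
		}
		fmt.Printf("stopped %d kube-system containers\n", len(ids))
	}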
	I1217 19:57:13.545636  577087 addons.go:530] duration metric: took 9.870367ms for enable addons: enabled=[]
	I1217 19:57:13.545674  577087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:13.672237  577087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:57:13.687519  577087 node_ready.go:35] waiting up to 6m0s for node "pause-318455" to be "Ready" ...
	I1217 19:57:13.695681  577087 node_ready.go:49] node "pause-318455" is "Ready"
	I1217 19:57:13.695709  577087 node_ready.go:38] duration metric: took 8.152816ms for node "pause-318455" to be "Ready" ...
	I1217 19:57:13.695728  577087 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:57:13.695773  577087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:57:13.709191  577087 api_server.go:72] duration metric: took 173.470144ms to wait for apiserver process to appear ...
	I1217 19:57:13.709221  577087 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:57:13.709245  577087 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 19:57:13.714667  577087 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 19:57:13.716322  577087 api_server.go:141] control plane version: v1.34.3
	I1217 19:57:13.716353  577087 api_server.go:131] duration metric: took 7.124457ms to wait for apiserver health ...
	I1217 19:57:13.716365  577087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:57:13.720175  577087 system_pods.go:59] 7 kube-system pods found
	I1217 19:57:13.720222  577087 system_pods.go:61] "coredns-66bc5c9577-l2sfj" [01975478-9e8c-4475-b2d0-82166c6a60a4] Running
	I1217 19:57:13.720233  577087 system_pods.go:61] "etcd-pause-318455" [1bf76e9b-1fa4-4fff-a305-a3ee8a2f0655] Running
	I1217 19:57:13.720239  577087 system_pods.go:61] "kindnet-z5f74" [2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916] Running
	I1217 19:57:13.720245  577087 system_pods.go:61] "kube-apiserver-pause-318455" [41734dc9-388e-44c1-8ce7-9e34ce94fef9] Running
	I1217 19:57:13.720252  577087 system_pods.go:61] "kube-controller-manager-pause-318455" [46dac58b-1c39-4c0d-bf34-75a3cf600307] Running
	I1217 19:57:13.720258  577087 system_pods.go:61] "kube-proxy-48bqr" [684ab215-b5a6-44fe-a4f6-fae57853d3c4] Running
	I1217 19:57:13.720266  577087 system_pods.go:61] "kube-scheduler-pause-318455" [db2bc6bb-b662-4150-8137-92a2657ea6a8] Running
	I1217 19:57:13.720276  577087 system_pods.go:74] duration metric: took 3.902559ms to wait for pod list to return data ...
	I1217 19:57:13.720289  577087 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:57:13.722722  577087 default_sa.go:45] found service account: "default"
	I1217 19:57:13.722748  577087 default_sa.go:55] duration metric: took 2.447803ms for default service account to be created ...
	I1217 19:57:13.722759  577087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:57:13.725579  577087 system_pods.go:86] 7 kube-system pods found
	I1217 19:57:13.725615  577087 system_pods.go:89] "coredns-66bc5c9577-l2sfj" [01975478-9e8c-4475-b2d0-82166c6a60a4] Running
	I1217 19:57:13.725623  577087 system_pods.go:89] "etcd-pause-318455" [1bf76e9b-1fa4-4fff-a305-a3ee8a2f0655] Running
	I1217 19:57:13.725629  577087 system_pods.go:89] "kindnet-z5f74" [2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916] Running
	I1217 19:57:13.725635  577087 system_pods.go:89] "kube-apiserver-pause-318455" [41734dc9-388e-44c1-8ce7-9e34ce94fef9] Running
	I1217 19:57:13.725641  577087 system_pods.go:89] "kube-controller-manager-pause-318455" [46dac58b-1c39-4c0d-bf34-75a3cf600307] Running
	I1217 19:57:13.725647  577087 system_pods.go:89] "kube-proxy-48bqr" [684ab215-b5a6-44fe-a4f6-fae57853d3c4] Running
	I1217 19:57:13.725652  577087 system_pods.go:89] "kube-scheduler-pause-318455" [db2bc6bb-b662-4150-8137-92a2657ea6a8] Running
	I1217 19:57:13.725668  577087 system_pods.go:126] duration metric: took 2.900187ms to wait for k8s-apps to be running ...
	I1217 19:57:13.725681  577087 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:57:13.725734  577087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:57:13.741883  577087 system_svc.go:56] duration metric: took 16.190658ms WaitForService to wait for kubelet
	I1217 19:57:13.741918  577087 kubeadm.go:587] duration metric: took 206.204517ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:57:13.741942  577087 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:57:13.745275  577087 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:57:13.745308  577087 node_conditions.go:123] node cpu capacity is 8
	I1217 19:57:13.745328  577087 node_conditions.go:105] duration metric: took 3.379877ms to run NodePressure ...
	I1217 19:57:13.745345  577087 start.go:242] waiting for startup goroutines ...
	I1217 19:57:13.745358  577087 start.go:247] waiting for cluster config update ...
	I1217 19:57:13.745372  577087 start.go:256] writing updated cluster config ...
	I1217 19:57:13.745784  577087 ssh_runner.go:195] Run: rm -f paused
	I1217 19:57:13.750323  577087 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:57:13.751294  577087 kapi.go:59] client config for pause-318455: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/pause-318455/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/pause-318455/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 19:57:13.754843  577087 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l2sfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.760835  577087 pod_ready.go:94] pod "coredns-66bc5c9577-l2sfj" is "Ready"
	I1217 19:57:13.760863  577087 pod_ready.go:86] duration metric: took 5.992534ms for pod "coredns-66bc5c9577-l2sfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.763406  577087 pod_ready.go:83] waiting for pod "etcd-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.769010  577087 pod_ready.go:94] pod "etcd-pause-318455" is "Ready"
	I1217 19:57:13.769069  577087 pod_ready.go:86] duration metric: took 5.640147ms for pod "etcd-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.771486  577087 pod_ready.go:83] waiting for pod "kube-apiserver-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.775756  577087 pod_ready.go:94] pod "kube-apiserver-pause-318455" is "Ready"
	I1217 19:57:13.775779  577087 pod_ready.go:86] duration metric: took 4.269421ms for pod "kube-apiserver-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.777919  577087 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.155717  577087 pod_ready.go:94] pod "kube-controller-manager-pause-318455" is "Ready"
	I1217 19:57:14.155753  577087 pod_ready.go:86] duration metric: took 377.806813ms for pod "kube-controller-manager-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.355787  577087 pod_ready.go:83] waiting for pod "kube-proxy-48bqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.755146  577087 pod_ready.go:94] pod "kube-proxy-48bqr" is "Ready"
	I1217 19:57:14.755190  577087 pod_ready.go:86] duration metric: took 399.368431ms for pod "kube-proxy-48bqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.955826  577087 pod_ready.go:83] waiting for pod "kube-scheduler-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:15.354857  577087 pod_ready.go:94] pod "kube-scheduler-pause-318455" is "Ready"
	I1217 19:57:15.354891  577087 pod_ready.go:86] duration metric: took 399.033505ms for pod "kube-scheduler-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:15.354904  577087 pod_ready.go:40] duration metric: took 1.604538612s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:57:15.418013  577087 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 19:57:15.421537  577087 out.go:179] * Done! kubectl is now configured to use "pause-318455" cluster and "default" namespace by default
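	The readiness checks above poll the apiserver's /healthz endpoint until it returns 200 before moving on to the pod checks. A minimal Go sketch of such a poll loop, using the URL from this log; the sketch skips TLS verification purely to stay self-contained, whereas the real client uses the cluster certificates shown in the kapi.go line above:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it answers 200 or the deadline passes.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// Verification skipped for brevity; do not do this outside a sketch.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.85.2:8443/healthz", time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("apiserver healthz: ok")
	}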
	I1217 19:57:13.837115  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:57:13.859258  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 19:57:13.879116  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:57:13.899336  576365 provision.go:87] duration metric: took 386.601257ms to configureAuth
	I1217 19:57:13.899364  576365 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:57:13.899553  576365 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:13.899686  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:13.920676  576365 main.go:143] libmachine: Using SSH client type: native
	I1217 19:57:13.921007  576365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1217 19:57:13.921026  576365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:57:14.244267  576365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:57:14.244288  576365 machine.go:97] duration metric: took 4.30398135s to provisionDockerMachine
	I1217 19:57:14.244300  576365 client.go:176] duration metric: took 10.199999275s to LocalClient.Create
	I1217 19:57:14.244324  576365 start.go:167] duration metric: took 10.20005099s to libmachine.API.Create "cert-expiration-059470"
	I1217 19:57:14.244332  576365 start.go:293] postStartSetup for "cert-expiration-059470" (driver="docker")
	I1217 19:57:14.244344  576365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:57:14.244403  576365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:57:14.244441  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.266406  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.375323  576365 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:57:14.380505  576365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:57:14.380528  576365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:57:14.380538  576365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:57:14.380597  576365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:57:14.380682  576365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 19:57:14.380804  576365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:57:14.390165  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:57:14.415803  576365 start.go:296] duration metric: took 171.455568ms for postStartSetup
	I1217 19:57:14.416263  576365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-059470
	I1217 19:57:14.437809  576365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/config.json ...
	I1217 19:57:14.438100  576365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:57:14.438161  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.462435  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.567329  576365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:57:14.572475  576365 start.go:128] duration metric: took 10.530771942s to createHost
	I1217 19:57:14.572493  576365 start.go:83] releasing machines lock for "cert-expiration-059470", held for 10.530917701s
	I1217 19:57:14.572556  576365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-059470
	I1217 19:57:14.592853  576365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:57:14.592852  576365 ssh_runner.go:195] Run: cat /version.json
	I1217 19:57:14.592909  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.592937  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.615547  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.617888  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.720708  576365 ssh_runner.go:195] Run: systemctl --version
	I1217 19:57:14.784825  576365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:57:14.826608  576365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:57:14.831989  576365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:57:14.832050  576365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:57:14.867116  576365 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:57:14.867133  576365 start.go:496] detecting cgroup driver to use...
	I1217 19:57:14.867166  576365 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:57:14.867216  576365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:57:14.889288  576365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:57:14.904332  576365 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:57:14.904390  576365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:57:14.924213  576365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:57:14.945909  576365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:57:15.055921  576365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:57:15.153574  576365 docker.go:234] disabling docker service ...
	I1217 19:57:15.153635  576365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:57:15.175751  576365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:57:15.190459  576365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:57:15.280780  576365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:57:15.401517  576365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:57:15.424566  576365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:57:15.449126  576365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:57:15.449181  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.466072  576365 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:57:15.466193  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.486909  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.503896  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.522520  576365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:57:15.539383  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.554054  576365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.576663  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.587513  576365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:57:15.596959  576365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:57:15.607708  576365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:15.723409  576365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:57:17.276603  576365 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.55316649s)
	I1217 19:57:17.276626  576365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:57:17.276683  576365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:57:17.280920  576365 start.go:564] Will wait 60s for crictl version
	I1217 19:57:17.280973  576365 ssh_runner.go:195] Run: which crictl
	I1217 19:57:17.285110  576365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:57:17.315574  576365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:57:17.315670  576365 ssh_runner.go:195] Run: crio --version
	I1217 19:57:17.346480  576365 ssh_runner.go:195] Run: crio --version
	I1217 19:57:17.380717  576365 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
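	The CRI-O preparation above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager), followed by a crio restart and a wait for /var/run/crio/crio.sock. A minimal Go sketch of just the two substitutions (not minikube's code; the restart and socket wait are left to the caller):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		conf := string(data)
		// Same effect as the two sed substitutions in the log above.
		conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
		conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAllString(conf, `cgroup_manager = "systemd"`)
		if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("rewrote", path, "- restart crio and wait for /var/run/crio/crio.sock")
	}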
	
	
	==> CRI-O <==
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.101268637Z" level=info msg="RDT not available in the host system"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.101277621Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.102327104Z" level=info msg="Conmon does support the --sync option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.102351274Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.102368664Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.103207133Z" level=info msg="Conmon does support the --sync option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.103228079Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.108232177Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.108266477Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.108761085Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.109207667Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.109261Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.19479336Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-l2sfj Namespace:kube-system ID:af67eebdf33e512a001c1d3a8d9a79dfc3086ae8b510b9d329e93b9eaa38aa29 UID:01975478-9e8c-4475-b2d0-82166c6a60a4 NetNS:/var/run/netns/ec3f0e86-4ded-4f50-97d5-40914ecc9f0a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009223e8}] Aliases:map[]}"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195006917Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-l2sfj for CNI network kindnet (type=ptp)"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.19557839Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195605547Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195674557Z" level=info msg="Create NRI interface"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195810437Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195827211Z" level=info msg="runtime interface created"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195844027Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195851985Z" level=info msg="runtime interface starting up..."
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195859437Z" level=info msg="starting plugins..."
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195896674Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.196262287Z" level=info msg="No systemd watchdog enabled"
	Dec 17 19:57:12 pause-318455 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	8a890307848d3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     14 seconds ago      Running             coredns                   0                   af67eebdf33e5       coredns-66bc5c9577-l2sfj               kube-system
	dece30b73bfce       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   25 seconds ago      Running             kindnet-cni               0                   a7145c38c7be2       kindnet-z5f74                          kube-system
	19248a249a354       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     28 seconds ago      Running             kube-proxy                0                   7df30a119177c       kube-proxy-48bqr                       kube-system
	ede91caa7f2fc       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     40 seconds ago      Running             kube-scheduler            0                   f4bc5ececc182       kube-scheduler-pause-318455            kube-system
	2cee54e9215fa       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     40 seconds ago      Running             kube-apiserver            0                   4fd64a24da55e       kube-apiserver-pause-318455            kube-system
	12f0a8e54bc78       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     40 seconds ago      Running             kube-controller-manager   0                   4d9752cdd0281       kube-controller-manager-pause-318455   kube-system
	76b691e5433f6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     40 seconds ago      Running             etcd                      0                   c6dbc67527147       etcd-pause-318455                      kube-system
	
	
	==> coredns [8a890307848d3863ac5dda4d27388c617ecb303c809f2a7fc9317b22fb60fda7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47913 - 31426 "HINFO IN 4982629858459019526.2784063241529817807. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022807116s
	
	
	==> describe nodes <==
	Name:               pause-318455
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-318455
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=pause-318455
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_56_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-318455
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:57:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-318455
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                cf2bbaf9-e321-41e3-b873-6f662bae94bb
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-l2sfj                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-pause-318455                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         36s
	  kube-system                 kindnet-z5f74                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-pause-318455             250m (3%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-pause-318455    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-48bqr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-pause-318455             100m (1%)     0 (0%)      0 (0%)           0 (0%)         36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node pause-318455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node pause-318455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node pause-318455 status is now: NodeHasSufficientPID
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s                kubelet          Node pause-318455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s                kubelet          Node pause-318455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s                kubelet          Node pause-318455 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node pause-318455 event: Registered Node pause-318455 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-318455 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [76b691e5433f67fe8b6ba2acd73106fa663b879e6b9059c7bba6777dd6049659] <==
	{"level":"info","ts":"2025-12-17T19:56:50.161032Z","caller":"traceutil/trace.go:172","msg":"trace[430430009] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"320.311044ms","start":"2025-12-17T19:56:49.840706Z","end":"2025-12-17T19:56:50.161017Z","steps":["trace[430430009] 'process raft request'  (duration: 320.100512ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:56:50.161067Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:49.842244Z","time spent":"318.771899ms","remote":"127.0.0.1:59842","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4086,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:373 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4026 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"warn","ts":"2025-12-17T19:56:50.161067Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:49.838348Z","time spent":"322.668138ms","remote":"127.0.0.1:59828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2863,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:371 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2812 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"warn","ts":"2025-12-17T19:56:50.161135Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:49.840673Z","time spent":"320.389902ms","remote":"127.0.0.1:59828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4704,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:370 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4656 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"warn","ts":"2025-12-17T19:56:50.488248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.598822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597791944127917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:339 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T19:56:50.488333Z","caller":"traceutil/trace.go:172","msg":"trace[431016538] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"319.944753ms","start":"2025-12-17T19:56:50.168374Z","end":"2025-12-17T19:56:50.488319Z","steps":["trace[431016538] 'process raft request'  (duration: 86.207609ms)","trace[431016538] 'compare'  (duration: 233.241961ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:56:50.488396Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:50.168357Z","time spent":"320.003283ms","remote":"127.0.0.1:59792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4323,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:339 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-12-17T19:56:50.497262Z","caller":"traceutil/trace.go:172","msg":"trace[2084870308] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:396; }","duration":"122.216703ms","start":"2025-12-17T19:56:50.375024Z","end":"2025-12-17T19:56:50.497241Z","steps":["trace[2084870308] 'read index received'  (duration: 122.208275ms)","trace[2084870308] 'applied index is now lower than readState.Index'  (duration: 7.393µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:56:50.497444Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.403737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-12-17T19:56:50.497467Z","caller":"traceutil/trace.go:172","msg":"trace[487005176] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:385; }","duration":"122.447566ms","start":"2025-12-17T19:56:50.375013Z","end":"2025-12-17T19:56:50.497460Z","steps":["trace[487005176] 'agreement among raft nodes before linearized reading'  (duration: 122.306485ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:50.497447Z","caller":"traceutil/trace.go:172","msg":"trace[1186271414] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"321.169939ms","start":"2025-12-17T19:56:50.176257Z","end":"2025-12-17T19:56:50.497427Z","steps":["trace[1186271414] 'process raft request'  (duration: 321.038482ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:56:50.497556Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:50.176236Z","time spent":"321.257311ms","remote":"127.0.0.1:59298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5325,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-z5f74\" mod_revision:380 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-z5f74\" value_size:5277 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-z5f74\" > >"}
	{"level":"info","ts":"2025-12-17T19:56:50.519550Z","caller":"traceutil/trace.go:172","msg":"trace[28871579] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"225.902922ms","start":"2025-12-17T19:56:50.293629Z","end":"2025-12-17T19:56:50.519532Z","steps":["trace[28871579] 'process raft request'  (duration: 225.802694ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:50.795054Z","caller":"traceutil/trace.go:172","msg":"trace[1099269762] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"256.014028ms","start":"2025-12-17T19:56:50.539019Z","end":"2025-12-17T19:56:50.795033Z","steps":["trace[1099269762] 'process raft request'  (duration: 211.636548ms)","trace[1099269762] 'compare'  (duration: 44.08424ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T19:56:50.795473Z","caller":"traceutil/trace.go:172","msg":"trace[781548046] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"161.911211ms","start":"2025-12-17T19:56:50.633549Z","end":"2025-12-17T19:56:50.795460Z","steps":["trace[781548046] 'process raft request'  (duration: 161.853445ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.111584Z","caller":"traceutil/trace.go:172","msg":"trace[285237607] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"244.95447ms","start":"2025-12-17T19:56:50.866613Z","end":"2025-12-17T19:56:51.111568Z","steps":["trace[285237607] 'process raft request'  (duration: 244.881218ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.111810Z","caller":"traceutil/trace.go:172","msg":"trace[1348301164] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"273.737769ms","start":"2025-12-17T19:56:50.838054Z","end":"2025-12-17T19:56:51.111791Z","steps":["trace[1348301164] 'process raft request'  (duration: 272.590905ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.271073Z","caller":"traceutil/trace.go:172","msg":"trace[182069330] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:406; }","duration":"160.622163ms","start":"2025-12-17T19:56:51.110429Z","end":"2025-12-17T19:56:51.271051Z","steps":["trace[182069330] 'read index received'  (duration: 160.613742ms)","trace[182069330] 'applied index is now lower than readState.Index'  (duration: 6.98µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:56:51.277929Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"247.246211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4400"}
	{"level":"info","ts":"2025-12-17T19:56:51.277982Z","caller":"traceutil/trace.go:172","msg":"trace[1236643297] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"162.389749ms","start":"2025-12-17T19:56:51.115579Z","end":"2025-12-17T19:56:51.277969Z","steps":["trace[1236643297] 'process raft request'  (duration: 162.354958ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.277980Z","caller":"traceutil/trace.go:172","msg":"trace[1605287942] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"391.463699ms","start":"2025-12-17T19:56:50.886494Z","end":"2025-12-17T19:56:51.277958Z","steps":["trace[1605287942] 'process raft request'  (duration: 384.65119ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.277994Z","caller":"traceutil/trace.go:172","msg":"trace[347070194] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:395; }","duration":"247.330995ms","start":"2025-12-17T19:56:51.030655Z","end":"2025-12-17T19:56:51.277986Z","steps":["trace[347070194] 'agreement among raft nodes before linearized reading'  (duration: 240.463836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:56:51.278119Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:50.886468Z","time spent":"391.567455ms","remote":"127.0.0.1:59792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4385,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:393 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4336 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-12-17T19:56:51.278138Z","caller":"traceutil/trace.go:172","msg":"trace[853644693] transaction","detail":"{read_only:false; number_of_response:1; response_revision:397; }","duration":"164.64451ms","start":"2025-12-17T19:56:51.113487Z","end":"2025-12-17T19:56:51.278131Z","steps":["trace[853644693] 'process raft request'  (duration: 164.402931ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:57:08.019169Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.571668ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597791944128097 > lease_revoke:<id:06ed9b2de2d3b5db>","response":"size:28"}
	
	
	==> kernel <==
	 19:57:19 up  1:39,  0 user,  load average: 6.83, 2.62, 1.86
	Linux pause-318455 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dece30b73bfce1ce557b5bbe5dbaf9154f600e34ba66b7ca4ca88e585241097c] <==
	I1217 19:56:53.479885       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 19:56:53.480294       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 19:56:53.480440       1 main.go:148] setting mtu 1500 for CNI 
	I1217 19:56:53.480461       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 19:56:53.480480       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T19:56:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 19:56:53.684459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 19:56:53.684495       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 19:56:53.684550       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 19:56:53.684775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 19:56:54.061840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 19:56:54.061875       1 metrics.go:72] Registering metrics
	I1217 19:56:54.061946       1 controller.go:711] "Syncing nftables rules"
	I1217 19:57:03.692213       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 19:57:03.692290       1 main.go:301] handling current node
	I1217 19:57:13.691194       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 19:57:13.691240       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2cee54e9215fa59351da49c19c47358d8bfa5c9c824fa627c1b9f685d24495b7] <==
	I1217 19:56:41.184898       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 19:56:41.184955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 19:56:41.191508       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 19:56:41.210300       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 19:56:41.210515       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:56:41.217500       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:56:41.217921       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 19:56:41.353763       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 19:56:41.988515       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 19:56:41.994211       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 19:56:41.994237       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 19:56:42.473270       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 19:56:42.506950       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 19:56:42.590661       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 19:56:42.597508       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 19:56:42.598509       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 19:56:42.602296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 19:56:43.097352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 19:56:43.477915       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 19:56:43.490835       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 19:56:43.498949       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 19:56:48.799626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 19:56:49.251552       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:56:49.257688       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 19:56:49.365607       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [12f0a8e54bc78853c3f054005a5648e352dda07cdc1713c286582320329e7057] <==
	I1217 19:56:48.103969       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 19:56:48.104774       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 19:56:48.110506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:56:48.113328       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-318455" podCIDRs=["10.244.0.0/24"]
	I1217 19:56:48.114655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 19:56:48.115786       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 19:56:48.124174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 19:56:48.145329       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 19:56:48.146512       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 19:56:48.146560       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 19:56:48.146615       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 19:56:48.146625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 19:56:48.146665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 19:56:48.146684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 19:56:48.148226       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 19:56:48.148339       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 19:56:48.148361       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 19:56:48.149592       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 19:56:48.149613       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 19:56:48.150787       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 19:56:48.152371       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 19:56:48.155250       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 19:56:48.155262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:56:48.158594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 19:57:08.099434       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [19248a249a354c5c3da43d5ddc3ff65f75c61b9f2cab9913aab8d6492000822f] <==
	I1217 19:56:51.319152       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:56:51.382519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 19:56:51.483330       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 19:56:51.483387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 19:56:51.483502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:56:51.508825       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 19:56:51.508895       1 server_linux.go:132] "Using iptables Proxier"
	I1217 19:56:51.515489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:56:51.515962       1 server.go:527] "Version info" version="v1.34.3"
	I1217 19:56:51.516390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:56:51.519049       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:56:51.519336       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:56:51.519154       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:56:51.519370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:56:51.519204       1 config.go:200] "Starting service config controller"
	I1217 19:56:51.519384       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:56:51.519228       1 config.go:309] "Starting node config controller"
	I1217 19:56:51.519396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:56:51.620329       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:56:51.620367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:56:51.620368       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:56:51.620378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ede91caa7f2fcc03537da65481e4d60d4a910e278cfbc996cd09ccdce85e42af] <==
	E1217 19:56:41.170811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 19:56:41.172214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 19:56:41.172374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:56:41.175514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 19:56:41.175699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:56:41.175784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:56:41.176004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 19:56:41.176011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:56:41.176114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 19:56:41.176203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:56:41.176233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:56:41.176286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 19:56:41.176301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:56:41.176354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 19:56:41.176363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:56:41.176482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 19:56:41.177412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:56:41.177861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:56:42.000654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:56:42.023953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:56:42.112328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:56:42.234511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 19:56:42.280027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:56:42.287228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1217 19:56:42.765170       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 19:56:44 pause-318455 kubelet[1317]: E1217 19:56:44.372231    1317 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-318455\" already exists" pod="kube-system/etcd-pause-318455"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.455376    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-318455" podStartSLOduration=1.455351497 podStartE2EDuration="1.455351497s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.433324652 +0000 UTC m=+1.185729850" watchObservedRunningTime="2025-12-17 19:56:44.455351497 +0000 UTC m=+1.207756691"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.467067    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-318455" podStartSLOduration=1.467042516 podStartE2EDuration="1.467042516s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.466938778 +0000 UTC m=+1.219344006" watchObservedRunningTime="2025-12-17 19:56:44.467042516 +0000 UTC m=+1.219447706"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.467306    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-318455" podStartSLOduration=1.467297263 podStartE2EDuration="1.467297263s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.455739324 +0000 UTC m=+1.208144541" watchObservedRunningTime="2025-12-17 19:56:44.467297263 +0000 UTC m=+1.219702460"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.493869    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-318455" podStartSLOduration=1.493842113 podStartE2EDuration="1.493842113s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.480271519 +0000 UTC m=+1.232676734" watchObservedRunningTime="2025-12-17 19:56:44.493842113 +0000 UTC m=+1.246247308"
	Dec 17 19:56:48 pause-318455 kubelet[1317]: I1217 19:56:48.134336    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 19:56:48 pause-318455 kubelet[1317]: I1217 19:56:48.135180    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768362    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/684ab215-b5a6-44fe-a4f6-fae57853d3c4-xtables-lock\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768434    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/684ab215-b5a6-44fe-a4f6-fae57853d3c4-kube-proxy\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768462    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/684ab215-b5a6-44fe-a4f6-fae57853d3c4-lib-modules\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768488    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfz8\" (UniqueName: \"kubernetes.io/projected/684ab215-b5a6-44fe-a4f6-fae57853d3c4-kube-api-access-nkfz8\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272588    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-cni-cfg\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272652    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-xtables-lock\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272683    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnkfl\" (UniqueName: \"kubernetes.io/projected/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-kube-api-access-fnkfl\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272704    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-lib-modules\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:51 pause-318455 kubelet[1317]: I1217 19:56:51.409103    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48bqr" podStartSLOduration=2.40906253 podStartE2EDuration="2.40906253s" podCreationTimestamp="2025-12-17 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:51.408682999 +0000 UTC m=+8.161088206" watchObservedRunningTime="2025-12-17 19:56:51.40906253 +0000 UTC m=+8.161467728"
	Dec 17 19:56:53 pause-318455 kubelet[1317]: I1217 19:56:53.400321    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z5f74" podStartSLOduration=2.071066059 podStartE2EDuration="4.400299746s" podCreationTimestamp="2025-12-17 19:56:49 +0000 UTC" firstStartedPulling="2025-12-17 19:56:50.864320375 +0000 UTC m=+7.616725554" lastFinishedPulling="2025-12-17 19:56:53.193554067 +0000 UTC m=+9.945959241" observedRunningTime="2025-12-17 19:56:53.400157635 +0000 UTC m=+10.152562852" watchObservedRunningTime="2025-12-17 19:56:53.400299746 +0000 UTC m=+10.152704952"
	Dec 17 19:57:04 pause-318455 kubelet[1317]: I1217 19:57:04.246427    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 19:57:04 pause-318455 kubelet[1317]: I1217 19:57:04.372609    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fgfq\" (UniqueName: \"kubernetes.io/projected/01975478-9e8c-4475-b2d0-82166c6a60a4-kube-api-access-4fgfq\") pod \"coredns-66bc5c9577-l2sfj\" (UID: \"01975478-9e8c-4475-b2d0-82166c6a60a4\") " pod="kube-system/coredns-66bc5c9577-l2sfj"
	Dec 17 19:57:04 pause-318455 kubelet[1317]: I1217 19:57:04.372684    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01975478-9e8c-4475-b2d0-82166c6a60a4-config-volume\") pod \"coredns-66bc5c9577-l2sfj\" (UID: \"01975478-9e8c-4475-b2d0-82166c6a60a4\") " pod="kube-system/coredns-66bc5c9577-l2sfj"
	Dec 17 19:57:05 pause-318455 kubelet[1317]: I1217 19:57:05.440816    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-l2sfj" podStartSLOduration=16.440780819 podStartE2EDuration="16.440780819s" podCreationTimestamp="2025-12-17 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:57:05.440408435 +0000 UTC m=+22.192813647" watchObservedRunningTime="2025-12-17 19:57:05.440780819 +0000 UTC m=+22.193186015"
	Dec 17 19:57:15 pause-318455 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 19:57:15 pause-318455 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 19:57:15 pause-318455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 19:57:15 pause-318455 systemd[1]: kubelet.service: Consumed 1.527s CPU time.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-318455 -n pause-318455
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-318455 -n pause-318455: exit status 2 (355.320154ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-318455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect pause-318455
helpers_test.go:244: (dbg) docker inspect pause-318455:

-- stdout --
	[
	    {
	        "Id": "304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38",
	        "Created": "2025-12-17T19:56:22.289919361Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 562285,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T19:56:22.883512955Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/hostname",
	        "HostsPath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/hosts",
	        "LogPath": "/var/lib/docker/containers/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38/304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38-json.log",
	        "Name": "/pause-318455",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-318455:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-318455",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "304c1eba0f9e3766918c891c0dc954639bf0857670fcf83db3dfc3606bcd6f38",
	                "LowerDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c535588bcfcfb7d693443818b6d4547a101db2c4163e22397e31a2b68e4f3fcf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-318455",
	                "Source": "/var/lib/docker/volumes/pause-318455/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-318455",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-318455",
	                "name.minikube.sigs.k8s.io": "pause-318455",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5fb39d8d518d49343fdf2bee5f83efd7757446d2cb2a8f386a55388e3b212d7b",
	            "SandboxKey": "/var/run/docker/netns/5fb39d8d518d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-318455": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8206734db8de825ac49e3419599fccb4210ea5530cc02084df1f155f4c026ac7",
	                    "EndpointID": "39ea0553221cbc9a18696672fcfa59a451e7db573594b278557d672e02c603b8",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d6:0d:88:f7:63:02",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-318455",
	                        "304c1eba0f9e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
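The inspect output above matters mainly for the 22/tcp host binding (127.0.0.1:33353 in this run), which is how the harness reaches the node over SSH. As a minimal, hypothetical Go sketch (not part of the minikube test suite), this is one way to read that binding back out of `docker inspect` JSON of the shape shown above; the container name pause-318455 and the expected output come from this report, everything else is an illustrative assumption.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the fields used here: NetworkSettings.Ports maps
	// a container port such as "22/tcp" to its published host bindings, exactly
	// as in the docker inspect block above.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// Container name taken from the report above (hypothetical usage).
		out, err := exec.Command("docker", "inspect", "pause-318455").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		if len(entries) == 0 {
			panic("no container found")
		}
		bindings := entries[0].NetworkSettings.Ports["22/tcp"]
		if len(bindings) == 0 {
			panic("22/tcp is not published")
		}
		// For the inspect output above this prints 127.0.0.1:33353.
		fmt.Printf("%s:%s\n", bindings[0].HostIp, bindings[0].HostPort)
	}

Decoding only the fields that are needed keeps the sketch independent of the rest of the (large) inspect schema.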
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-318455 -n pause-318455
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-318455 -n pause-318455: exit status 2 (358.141154ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p pause-318455 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p pause-318455 logs -n 25: (1.059192778s)
helpers_test.go:261: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --cancel-scheduled                                                                                 │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:54 UTC │ 17 Dec 25 19:54 UTC │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │                     │
	│ stop    │ -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │ 17 Dec 25 19:55 UTC │
	│ delete  │ -p scheduled-stop-197684                                                                                                    │ scheduled-stop-197684       │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │ 17 Dec 25 19:55 UTC │
	│ start   │ -p insufficient-storage-455834 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-455834 │ jenkins │ v1.37.0 │ 17 Dec 25 19:55 UTC │                     │
	│ delete  │ -p insufficient-storage-455834                                                                                              │ insufficient-storage-455834 │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p force-systemd-env-335995 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-335995    │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p pause-318455 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-318455                │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p offline-crio-299824 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-299824         │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p stopped-upgrade-321305 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ stopped-upgrade-321305      │ jenkins │ v1.35.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ delete  │ -p force-systemd-env-335995                                                                                                 │ force-systemd-env-335995    │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p running-upgrade-827750 --memory=3072 --vm-driver=docker  --container-runtime=crio                                        │ running-upgrade-827750      │ jenkins │ v1.35.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ stop    │ stopped-upgrade-321305 stop                                                                                                 │ stopped-upgrade-321305      │ jenkins │ v1.35.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:56 UTC │
	│ start   │ -p stopped-upgrade-321305 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ stopped-upgrade-321305      │ jenkins │ v1.37.0 │ 17 Dec 25 19:56 UTC │ 17 Dec 25 19:57 UTC │
	│ delete  │ -p offline-crio-299824                                                                                                      │ offline-crio-299824         │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-059470      │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	│ start   │ -p pause-318455 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-318455                │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p running-upgrade-827750 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ running-upgrade-827750      │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	│ delete  │ -p stopped-upgrade-321305                                                                                                   │ stopped-upgrade-321305      │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │ 17 Dec 25 19:57 UTC │
	│ start   │ -p force-systemd-flag-134068 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-134068   │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	│ pause   │ -p pause-318455 --alsologtostderr -v=5                                                                                      │ pause-318455                │ jenkins │ v1.37.0 │ 17 Dec 25 19:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:57:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:57:14.036479  580641 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:57:14.036799  580641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:57:14.036811  580641 out.go:374] Setting ErrFile to fd 2...
	I1217 19:57:14.036815  580641 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:57:14.037068  580641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:57:14.037557  580641 out.go:368] Setting JSON to false
	I1217 19:57:14.038736  580641 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5985,"bootTime":1765995449,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:57:14.038794  580641 start.go:143] virtualization: kvm guest
	I1217 19:57:14.040998  580641 out.go:179] * [force-systemd-flag-134068] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:57:14.042538  580641 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:57:14.042622  580641 notify.go:221] Checking for updates...
	I1217 19:57:14.045228  580641 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:57:14.046754  580641 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:57:14.048123  580641 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:57:14.049411  580641 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:57:14.050996  580641 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:57:14.053254  580641 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:14.053471  580641 config.go:182] Loaded profile config "pause-318455": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:14.053590  580641 config.go:182] Loaded profile config "running-upgrade-827750": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.32.0
	I1217 19:57:14.053720  580641 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:57:14.081358  580641 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:57:14.081478  580641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:57:14.145067  580641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 19:57:14.133215392 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:57:14.145248  580641 docker.go:319] overlay module found
	I1217 19:57:14.150539  580641 out.go:179] * Using the docker driver based on user configuration
	I1217 19:57:14.151806  580641 start.go:309] selected driver: docker
	I1217 19:57:14.151827  580641 start.go:927] validating driver "docker" against <nil>
	I1217 19:57:14.151848  580641 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:57:14.152514  580641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:57:14.219248  580641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-12-17 19:57:14.209493418 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:57:14.219448  580641 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:57:14.219698  580641 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:57:14.221600  580641 out.go:179] * Using Docker driver with root privileges
	I1217 19:57:14.223164  580641 cni.go:84] Creating CNI manager for ""
	I1217 19:57:14.223239  580641 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:57:14.223254  580641 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 19:57:14.223375  580641 start.go:353] cluster config:
	{Name:force-systemd-flag-134068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-134068 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:57:14.224745  580641 out.go:179] * Starting "force-systemd-flag-134068" primary control-plane node in "force-systemd-flag-134068" cluster
	I1217 19:57:14.226016  580641 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 19:57:14.227425  580641 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 19:57:14.228707  580641 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:57:14.228752  580641 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 19:57:14.228763  580641 cache.go:65] Caching tarball of preloaded images
	I1217 19:57:14.228846  580641 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 19:57:14.228913  580641 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 19:57:14.228929  580641 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 19:57:14.229055  580641 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/force-systemd-flag-134068/config.json ...
	I1217 19:57:14.229118  580641 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/force-systemd-flag-134068/config.json: {Name:mkd1292bf2c40cbf6298cfdeb86e55351afaaef7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:14.253559  580641 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 19:57:14.253583  580641 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 19:57:14.253604  580641 cache.go:243] Successfully downloaded all kic artifacts
	I1217 19:57:14.253642  580641 start.go:360] acquireMachinesLock for force-systemd-flag-134068: {Name:mk85e60e74f50e12c3ce481cea309b0a4fa323d3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:57:14.253765  580641 start.go:364] duration metric: took 100.493µs to acquireMachinesLock for "force-systemd-flag-134068"
	I1217 19:57:14.253830  580641 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-134068 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:force-systemd-flag-134068 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:57:14.253936  580641 start.go:125] createHost starting for "" (driver="docker")
	I1217 19:57:13.451883  578056 cli_runner.go:164] Run: docker network inspect running-upgrade-827750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:57:13.473013  578056 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 19:57:13.478151  578056 kubeadm.go:884] updating cluster {Name:running-upgrade-827750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-827750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMn
etClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:57:13.478289  578056 preload.go:188] Checking if preload exists for k8s version v1.32.0 and runtime crio
	I1217 19:57:13.478357  578056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:57:13.530315  578056 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:57:13.530342  578056 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:57:13.530398  578056 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:57:13.572774  578056 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:57:13.572801  578056 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:57:13.572809  578056 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.32.0 crio true true} ...
	I1217 19:57:13.572946  578056 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=running-upgrade-827750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-827750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:57:13.573030  578056 ssh_runner.go:195] Run: crio config
	I1217 19:57:13.633742  578056 cni.go:84] Creating CNI manager for ""
	I1217 19:57:13.633768  578056 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:57:13.633789  578056 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:57:13.633821  578056 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.32.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:running-upgrade-827750 NodeName:running-upgrade-827750 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:57:13.634027  578056 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "running-upgrade-827750"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.32.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:57:13.634122  578056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.0
	I1217 19:57:13.644377  578056 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:57:13.644452  578056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:57:13.654572  578056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 19:57:13.677302  578056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:57:13.700305  578056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I1217 19:57:13.723176  578056 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:57:13.727805  578056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:13.841125  578056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:57:13.855195  578056 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750 for IP: 192.168.76.2
	I1217 19:57:13.855222  578056 certs.go:195] generating shared ca certs ...
	I1217 19:57:13.855242  578056 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:13.855424  578056 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:57:13.855500  578056 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:57:13.855519  578056 certs.go:257] generating profile certs ...
	I1217 19:57:13.855629  578056 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/client.key
	I1217 19:57:13.855689  578056 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/apiserver.key.f9167027
	I1217 19:57:13.855742  578056 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/proxy-client.key
	I1217 19:57:13.855911  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:57:13.855953  578056 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:57:13.855968  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:57:13.855995  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:57:13.856020  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:57:13.856046  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:57:13.856121  578056 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:57:13.857148  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:57:13.887842  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:57:13.919670  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:57:13.948683  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:57:13.977444  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 19:57:14.006333  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:57:14.035253  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:57:14.062472  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:57:14.092023  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:57:14.128338  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:57:14.158701  578056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:57:14.192990  578056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:57:14.215335  578056 ssh_runner.go:195] Run: openssl version
	I1217 19:57:14.221783  578056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.231695  578056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:57:14.241465  578056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.246614  578056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.246676  578056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:57:14.255254  578056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:57:14.266427  578056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.276231  578056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:57:14.287003  578056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.291982  578056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.292058  578056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:14.300601  578056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:57:14.311459  578056 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.322456  578056 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:57:14.332980  578056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.336972  578056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.337029  578056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:57:14.345526  578056 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:57:14.358223  578056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:57:14.363001  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 19:57:14.370802  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 19:57:14.379650  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 19:57:14.388002  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 19:57:14.396034  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 19:57:14.404970  578056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 19:57:14.412958  578056 kubeadm.go:401] StartCluster: {Name:running-upgrade-827750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.46@sha256:fd2d445ddcc33ebc5c6b68a17e6219ea207ce63c005095ea1525296da2d1a279 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.0 ClusterName:running-upgrade-827750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] AP
IServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.32.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:57:14.413189  578056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:57:14.413251  578056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:57:14.461217  578056 cri.go:89] found id: "c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30"
	I1217 19:57:14.461242  578056 cri.go:89] found id: "eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2"
	I1217 19:57:14.461250  578056 cri.go:89] found id: "67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9"
	I1217 19:57:14.461254  578056 cri.go:89] found id: "5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8"
	I1217 19:57:14.461259  578056 cri.go:89] found id: "ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a"
	I1217 19:57:14.461264  578056 cri.go:89] found id: ""
	I1217 19:57:14.461310  578056 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 19:57:14.487065  578056 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8","pid":1407,"status":"running","bundle":"/run/containers/storage/overlay-containers/5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8/userdata","rootfs":"/var/lib/containers/storage/overlay/9b3c150ca7bcba5bf34f331a576916ee09f7697cc3902c309e4d8014670025f3/merged","created":"2025-12-17T19:57:01.521248761Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"99f3a73e","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"99f3a73e\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:01.441006787Z","io.kubernetes.cri-o.Image":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.32.0","io.kubernetes.cri-o.ImageRef":"8cab3d2a8bd0fe4127810f35afe0ffd42bfe75b2a4712a84da5595d4bde617d3","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controller-manager-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"be85137664fa759b16844d75be32d27d\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-running-upgrade-827750_be85137664fa759b16844d75be32d27d/kube-controller-manager/0
.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9b3c150ca7bcba5bf34f331a576916ee09f7697cc3902c309e4d8014670025f3/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-running-upgrade-827750_kube-system_be85137664fa759b16844d75be32d27d_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/04155491f8ce28a7a0f843c979a2bc8734a6a1ca33a9b4cbf939cde42626a0de/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"04155491f8ce28a7a0f843c979a2bc8734a6a1ca33a9b4cbf939cde42626a0de","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-running-upgrade-827750_kube-system_be85137664fa759b16844d75be32d27d_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-
certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/be85137664fa759b16844d75be32d27d/containers/kube-controller-manager/ea608ad0\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/be85137664fa759b16844d75be32d27d/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path
\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"be85137664fa759b16844d75be32d27d","kubernetes.io/config.hash":"be85137664fa759b16844d75be32d27d","kubernetes.io/config.seen":"2025-12-17T19:57:00.936598074Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.
systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9","pid":1414,"status":"running","bundle":"/run/containers/storage/overlay-containers/67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9/userdata","rootfs":"/var/lib/containers/storage/overlay/56b437f0a21804dfd160c6f79b644b403fa410251e1dc64ca10c54f8c3eba1a0/merged","created":"2025-12-17T19:57:01.51538618Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"8c4b12d6","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"8c4b12d6\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.containe
r.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:01.455514784Z","io.kubernetes.cri-o.Image":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.32.0","io.kubernetes.cri-o.ImageRef":"a389e107f4ff1130c69849f0af08cbce9a1dfe3b7c39874012587d233807cfc5","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"4527a95fbd358783b084b49b25a105e8\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-running-upgrade-827750_4527a95fbd358783b084b49b25a105e8/kube-scheduler/0.log","io.kubernetes.cri-o.Metadata":"{\"name
\":\"kube-scheduler\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/56b437f0a21804dfd160c6f79b644b403fa410251e1dc64ca10c54f8c3eba1a0/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-running-upgrade-827750_kube-system_4527a95fbd358783b084b49b25a105e8_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7dd109e08d775d70a4c4f1efae614567783812d56fe4f3a1c673bb2cda2e93b7/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7dd109e08d775d70a4c4f1efae614567783812d56fe4f3a1c673bb2cda2e93b7","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-running-upgrade-827750_kube-system_4527a95fbd358783b084b49b25a105e8_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/4527a95fbd358783b084b49b25a105e8/etc-hosts\",\"readonly\":false,\"propagation
\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/4527a95fbd358783b084b49b25a105e8/containers/kube-scheduler/7624139d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"4527a95fbd358783b084b49b25a105e8","kubernetes.io/config.hash":"4527a95fbd358783b084b49b25a105e8","kubernetes.io/config.seen":"2025-12-17T19:57:00.936599561Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}
,{"ociVersion":"1.0.2-dev","id":"c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30","pid":1910,"status":"running","bundle":"/run/containers/storage/overlay-containers/c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30/userdata","rootfs":"/var/lib/containers/storage/overlay/028142cc41fbc0d0a6c59ce389a5bc6666d0756a0890e0f03a29a73f2ae979f1/merged","created":"2025-12-17T19:57:12.059624263Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"6c6bf961","io.kubernetes.container.name":"storage-provisioner","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"6c6bf961\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.term
inationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:12.008374554Z","io.kubernetes.cri-o.Image":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.ImageName":"gcr.io/k8s-minikube/storage-provisioner:v5","io.kubernetes.cri-o.ImageRef":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"storage-provisioner\",\"io.kubernetes.pod.name\":\"storage-provisioner\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"cf29c230-6fc0-49cc-9bb0-b76255ee79b3\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_storage-provisioner_cf29c230-6fc0-49cc-9bb0-b76255ee79b3/storage-provisioner/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"storage-provisioner\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/cont
ainers/storage/overlay/028142cc41fbc0d0a6c59ce389a5bc6666d0756a0890e0f03a29a73f2ae979f1/merged","io.kubernetes.cri-o.Name":"k8s_storage-provisioner_storage-provisioner_kube-system_cf29c230-6fc0-49cc-9bb0-b76255ee79b3_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/30bcbbd7661a6c026db69a4c64da3f0835488ae1ae8497663e7619d9e66555b0/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"30bcbbd7661a6c026db69a4c64da3f0835488ae1ae8497663e7619d9e66555b0","io.kubernetes.cri-o.SandboxName":"k8s_storage-provisioner_kube-system_cf29c230-6fc0-49cc-9bb0-b76255ee79b3_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/tmp\",\"host_path\":\"/tmp\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/cf29c230-6fc0-49cc-9bb0-b76255ee79b3/etc-hosts\",\"read
only\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/cf29c230-6fc0-49cc-9bb0-b76255ee79b3/containers/storage-provisioner/71cb6206\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"host_path\":\"/var/lib/kubelet/pods/cf29c230-6fc0-49cc-9bb0-b76255ee79b3/volumes/kubernetes.io~projected/kube-api-access-vc2fd\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"storage-provisioner","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"cf29c230-6fc0-49cc-9bb0-b76255ee79b3","kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"name
space\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n","kubernetes.io/config.seen":"2025-12-17T19:57:11.669095716Z","kubernetes.io/config.source":"api","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2","pid":1423,"status":"running","bundle":"/run/containers/storage/overlay-containers/eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2/userdata
","rootfs":"/var/lib/containers/storage/overlay/c636a31b83bb618efd1d6c576f6a7a264ec9e7e9fd1eba071e32b64d8a8f0963/merged","created":"2025-12-17T19:57:01.523735873Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e68be80f","io.kubernetes.container.name":"etcd","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e68be80f\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-17T19:57:01.45831477Z","io.kubernetes.cri-o.Image":"
a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.5.16-0","io.kubernetes.cri-o.ImageRef":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"c79baa9b7a3546eed685736683689cae\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-running-upgrade-827750_c79baa9b7a3546eed685736683689cae/etcd/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c636a31b83bb618efd1d6c576f6a7a264ec9e7e9fd1eba071e32b64d8a8f0963/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-running-upgrade-827750_kube-system_c79baa9b7a3546eed685736683689cae_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/f2e75ac7e7bb3e64d590e3827d990c543b8
6b5bf93057e43d8b8610ed4d80972/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"f2e75ac7e7bb3e64d590e3827d990c543b86b5bf93057e43d8b8610ed4d80972","io.kubernetes.cri-o.SandboxName":"k8s_etcd-running-upgrade-827750_kube-system_c79baa9b7a3546eed685736683689cae_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/c79baa9b7a3546eed685736683689cae/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/c79baa9b7a3546eed685736683689cae/containers/etcd/176c49cf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/
minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"c79baa9b7a3546eed685736683689cae","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.76.2:2379","kubernetes.io/config.hash":"c79baa9b7a3546eed685736683689cae","kubernetes.io/config.seen":"2025-12-17T19:57:00.936601021Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a","pid":1400,"status":"running","bundle":"/run/containers/storage/overlay-containers/ed0b2dba94e66a9279b817134e5b8ed5
59668ef50c6b6a01f82bca41f61dfa2a/userdata","rootfs":"/var/lib/containers/storage/overlay/9441aba44a344b14388881f7531a5dcb1eae6023ba21115c59344881096e0143/merged","created":"2025-12-17T19:57:01.510186854Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"bf915d6a","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"bf915d6a\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-12-1
7T19:57:01.42715216Z","io.kubernetes.cri-o.Image":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.32.0","io.kubernetes.cri-o.ImageRef":"c2e17b8d0f4a39ed32f1c1fd4eb408627c94111ae9a46c2034758e4ced4f79c4","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-running-upgrade-827750\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"5751ebc3794da7e9b32cdcff3cdc6826\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-running-upgrade-827750_5751ebc3794da7e9b32cdcff3cdc6826/kube-apiserver/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/9441aba44a344b14388881f7531a5dcb1eae6023ba21115c59344881096e0143/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-running-upgrade-827750_kube-system_5751ebc3794da7e9b3
2cdcff3cdc6826_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/152d9bf95d29f3a81eb7d6f2bad2a49cad2bbbc511b7bd30e8ee43bdf3cf5e7c/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"152d9bf95d29f3a81eb7d6f2bad2a49cad2bbbc511b7bd30e8ee43bdf3cf5e7c","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-running-upgrade-827750_kube-system_5751ebc3794da7e9b32cdcff3cdc6826_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/5751ebc3794da7e9b32cdcff3cdc6826/containers/kube-apiserver/36fd9baf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/
pods/5751ebc3794da7e9b32cdcff3cdc6826/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-running-upgrade-827750","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"5751ebc3794da7e9b32cdcff3cdc6826","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168
.76.2:8443","kubernetes.io/config.hash":"5751ebc3794da7e9b32cdcff3cdc6826","kubernetes.io/config.seen":"2025-12-17T19:57:00.936593679Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I1217 19:57:14.487379  578056 cri.go:126] list returned 5 containers
	I1217 19:57:14.487395  578056 cri.go:129] container: {ID:5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 Status:running}
	I1217 19:57:14.487429  578056 cri.go:135] skipping {5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 running}: state = "running", want "paused"
	I1217 19:57:14.487439  578056 cri.go:129] container: {ID:67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 Status:running}
	I1217 19:57:14.487444  578056 cri.go:135] skipping {67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 running}: state = "running", want "paused"
	I1217 19:57:14.487451  578056 cri.go:129] container: {ID:c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 Status:running}
	I1217 19:57:14.487456  578056 cri.go:135] skipping {c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 running}: state = "running", want "paused"
	I1217 19:57:14.487463  578056 cri.go:129] container: {ID:eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 Status:running}
	I1217 19:57:14.487468  578056 cri.go:135] skipping {eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 running}: state = "running", want "paused"
	I1217 19:57:14.487476  578056 cri.go:129] container: {ID:ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a Status:running}
	I1217 19:57:14.487480  578056 cri.go:135] skipping {ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a running}: state = "running", want "paused"
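The JSON dump above is the raw state the container runtime reports for each kube-system container; minikube's cri.go then walks that list and skips every container whose state is not the one it wants (here it wanted "paused" and found only "running"). A minimal Go sketch of that filtering step, assuming the dump has already been captured into a byte slice — the id/status field names are taken from the output above, everything else (package layout, helper name, sample data) is illustrative:

package main

import (
	"encoding/json"
	"fmt"
)

// containerState mirrors only the "id" and "status" keys visible in the
// runtime dump above; every other field is ignored.
type containerState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// selectByState returns the IDs whose status matches want; anything else is
// skipped, which is what produces the `state = "running", want "paused"` lines.
func selectByState(dump []byte, want string) ([]string, error) {
	var all []containerState
	if err := json.Unmarshal(dump, &all); err != nil {
		return nil, err
	}
	var ids []string
	for _, c := range all {
		if c.Status != want {
			fmt.Printf("skipping {%s %s}: state = %q, want %q\n", c.ID, c.Status, c.Status, want)
			continue
		}
		ids = append(ids, c.ID)
	}
	return ids, nil
}

func main() {
	// Illustrative two-entry dump; the real one above has five containers.
	dump := []byte(`[{"id":"aaa","status":"running"},{"id":"bbb","status":"paused"}]`)
	ids, err := selectByState(dump, "paused")
	if err != nil {
		panic(err)
	}
	fmt.Println("matching containers:", ids)
}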
	I1217 19:57:14.487525  578056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:57:14.498243  578056 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 19:57:14.498270  578056 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 19:57:14.498334  578056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 19:57:14.508653  578056 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:57:14.509450  578056 kubeconfig.go:47] verify endpoint returned: get endpoint: "running-upgrade-827750" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:57:14.509763  578056 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-372245/kubeconfig needs updating (will repair): [kubeconfig missing "running-upgrade-827750" cluster setting kubeconfig missing "running-upgrade-827750" context setting]
	I1217 19:57:14.510353  578056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:14.511219  578056 kapi.go:59] client config for running-upgrade-827750: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/running-upgrade-827750/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADat
a:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 19:57:14.511683  578056 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 19:57:14.511707  578056 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 19:57:14.511714  578056 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 19:57:14.511721  578056 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 19:57:14.511726  578056 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 19:57:14.512322  578056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 19:57:14.523052  578056 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 19:56:56.880555879 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 19:57:13.719334428 +0000
	@@ -41,9 +41,6 @@
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      - name: "proxy-refresh-interval"
	-        value: "70000"
	 kubernetesVersion: v1.32.0
	 networking:
	   dnsDomain: cluster.local
	
	-- /stdout --
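The unified diff above is how minikube decides the saved kubeadm config has drifted: diff -u exits 0 when /var/tmp/minikube/kubeadm.yaml and kubeadm.yaml.new match and 1 when they differ, and the non-zero exit here is what triggers the reconfiguration that follows. A rough local Go equivalent of that check (paths and the sudo wrapper as logged; the exit-code handling is the generic os/exec pattern, not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// diff -u exits 0 when the files are identical and 1 when they differ;
	// anything else (for example a missing file) is treated as a real error.
	cmd := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	out, err := cmd.CombinedOutput()
	switch {
	case err == nil:
		fmt.Println("kubeadm config unchanged, no reconfiguration needed")
	case cmd.ProcessState != nil && cmd.ProcessState.ExitCode() == 1:
		fmt.Printf("detected kubeadm config drift, will reconfigure:\n%s", out)
	default:
		fmt.Printf("diff failed: %v\n", err)
	}
}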
	I1217 19:57:14.523088  578056 kubeadm.go:1161] stopping kube-system containers ...
	I1217 19:57:14.523111  578056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1217 19:57:14.523174  578056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:57:14.563149  578056 cri.go:89] found id: "c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30"
	I1217 19:57:14.563175  578056 cri.go:89] found id: "eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2"
	I1217 19:57:14.563190  578056 cri.go:89] found id: "67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9"
	I1217 19:57:14.563195  578056 cri.go:89] found id: "5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8"
	I1217 19:57:14.563200  578056 cri.go:89] found id: "ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a"
	I1217 19:57:14.563205  578056 cri.go:89] found id: ""
	I1217 19:57:14.563213  578056 cri.go:252] Stopping containers: [c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a]
	I1217 19:57:14.563274  578056 ssh_runner.go:195] Run: which crictl
	I1217 19:57:14.567166  578056 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 c9d4d725fcf2d281bd3fb53d8e94cb9634c4b8181bae7de59157c344d9bf4b30 eae68eec6ef22c52a261146bfe03bc628cc599576c2d3a98e41f953a5b7891d2 67508bb6df0115cfc93f7ac49ab96b029831c7d11d88227acf179d54da743ee9 5116728583f140952afb3a56a2abd06655229ce18504b3e9c2ab29f962468de8 ed0b2dba94e66a9279b817134e5b8ed559668ef50c6b6a01f82bca41f61dfa2a
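The stale kube-system containers are then stopped in a single crictl stop --timeout=10 call listing all five IDs, exactly as logged. A hedged sketch of issuing that call with os/exec — minikube actually runs it over SSH on the node, and the IDs below are placeholders:

package main

import (
	"fmt"
	"os/exec"
)

// stopContainers invokes crictl once for the whole batch, mirroring the
// single "sudo /usr/bin/crictl stop --timeout=10 id1 id2 ..." command above.
func stopContainers(ids []string) error {
	args := append([]string{"/usr/bin/crictl", "stop", "--timeout=10"}, ids...)
	out, err := exec.Command("sudo", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("crictl stop failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	// Placeholder IDs; in the log these are the five kube-system containers found above.
	if err := stopContainers([]string{"<container-id-1>", "<container-id-2>"}); err != nil {
		fmt.Println(err)
	}
}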
	I1217 19:57:13.545636  577087 addons.go:530] duration metric: took 9.870367ms for enable addons: enabled=[]
	I1217 19:57:13.545674  577087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:13.672237  577087 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:57:13.687519  577087 node_ready.go:35] waiting up to 6m0s for node "pause-318455" to be "Ready" ...
	I1217 19:57:13.695681  577087 node_ready.go:49] node "pause-318455" is "Ready"
	I1217 19:57:13.695709  577087 node_ready.go:38] duration metric: took 8.152816ms for node "pause-318455" to be "Ready" ...
	I1217 19:57:13.695728  577087 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:57:13.695773  577087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:57:13.709191  577087 api_server.go:72] duration metric: took 173.470144ms to wait for apiserver process to appear ...
	I1217 19:57:13.709221  577087 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:57:13.709245  577087 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 19:57:13.714667  577087 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 19:57:13.716322  577087 api_server.go:141] control plane version: v1.34.3
	I1217 19:57:13.716353  577087 api_server.go:131] duration metric: took 7.124457ms to wait for apiserver health ...
	I1217 19:57:13.716365  577087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:57:13.720175  577087 system_pods.go:59] 7 kube-system pods found
	I1217 19:57:13.720222  577087 system_pods.go:61] "coredns-66bc5c9577-l2sfj" [01975478-9e8c-4475-b2d0-82166c6a60a4] Running
	I1217 19:57:13.720233  577087 system_pods.go:61] "etcd-pause-318455" [1bf76e9b-1fa4-4fff-a305-a3ee8a2f0655] Running
	I1217 19:57:13.720239  577087 system_pods.go:61] "kindnet-z5f74" [2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916] Running
	I1217 19:57:13.720245  577087 system_pods.go:61] "kube-apiserver-pause-318455" [41734dc9-388e-44c1-8ce7-9e34ce94fef9] Running
	I1217 19:57:13.720252  577087 system_pods.go:61] "kube-controller-manager-pause-318455" [46dac58b-1c39-4c0d-bf34-75a3cf600307] Running
	I1217 19:57:13.720258  577087 system_pods.go:61] "kube-proxy-48bqr" [684ab215-b5a6-44fe-a4f6-fae57853d3c4] Running
	I1217 19:57:13.720266  577087 system_pods.go:61] "kube-scheduler-pause-318455" [db2bc6bb-b662-4150-8137-92a2657ea6a8] Running
	I1217 19:57:13.720276  577087 system_pods.go:74] duration metric: took 3.902559ms to wait for pod list to return data ...
	I1217 19:57:13.720289  577087 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:57:13.722722  577087 default_sa.go:45] found service account: "default"
	I1217 19:57:13.722748  577087 default_sa.go:55] duration metric: took 2.447803ms for default service account to be created ...
	I1217 19:57:13.722759  577087 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:57:13.725579  577087 system_pods.go:86] 7 kube-system pods found
	I1217 19:57:13.725615  577087 system_pods.go:89] "coredns-66bc5c9577-l2sfj" [01975478-9e8c-4475-b2d0-82166c6a60a4] Running
	I1217 19:57:13.725623  577087 system_pods.go:89] "etcd-pause-318455" [1bf76e9b-1fa4-4fff-a305-a3ee8a2f0655] Running
	I1217 19:57:13.725629  577087 system_pods.go:89] "kindnet-z5f74" [2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916] Running
	I1217 19:57:13.725635  577087 system_pods.go:89] "kube-apiserver-pause-318455" [41734dc9-388e-44c1-8ce7-9e34ce94fef9] Running
	I1217 19:57:13.725641  577087 system_pods.go:89] "kube-controller-manager-pause-318455" [46dac58b-1c39-4c0d-bf34-75a3cf600307] Running
	I1217 19:57:13.725647  577087 system_pods.go:89] "kube-proxy-48bqr" [684ab215-b5a6-44fe-a4f6-fae57853d3c4] Running
	I1217 19:57:13.725652  577087 system_pods.go:89] "kube-scheduler-pause-318455" [db2bc6bb-b662-4150-8137-92a2657ea6a8] Running
	I1217 19:57:13.725668  577087 system_pods.go:126] duration metric: took 2.900187ms to wait for k8s-apps to be running ...
	I1217 19:57:13.725681  577087 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:57:13.725734  577087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:57:13.741883  577087 system_svc.go:56] duration metric: took 16.190658ms WaitForService to wait for kubelet
	I1217 19:57:13.741918  577087 kubeadm.go:587] duration metric: took 206.204517ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:57:13.741942  577087 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:57:13.745275  577087 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:57:13.745308  577087 node_conditions.go:123] node cpu capacity is 8
	I1217 19:57:13.745328  577087 node_conditions.go:105] duration metric: took 3.379877ms to run NodePressure ...
	I1217 19:57:13.745345  577087 start.go:242] waiting for startup goroutines ...
	I1217 19:57:13.745358  577087 start.go:247] waiting for cluster config update ...
	I1217 19:57:13.745372  577087 start.go:256] writing updated cluster config ...
	I1217 19:57:13.745784  577087 ssh_runner.go:195] Run: rm -f paused
	I1217 19:57:13.750323  577087 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:57:13.751294  577087 kapi.go:59] client config for pause-318455: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/pause-318455/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/pause-318455/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]s
tring(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 19:57:13.754843  577087 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-l2sfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.760835  577087 pod_ready.go:94] pod "coredns-66bc5c9577-l2sfj" is "Ready"
	I1217 19:57:13.760863  577087 pod_ready.go:86] duration metric: took 5.992534ms for pod "coredns-66bc5c9577-l2sfj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.763406  577087 pod_ready.go:83] waiting for pod "etcd-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.769010  577087 pod_ready.go:94] pod "etcd-pause-318455" is "Ready"
	I1217 19:57:13.769069  577087 pod_ready.go:86] duration metric: took 5.640147ms for pod "etcd-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.771486  577087 pod_ready.go:83] waiting for pod "kube-apiserver-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.775756  577087 pod_ready.go:94] pod "kube-apiserver-pause-318455" is "Ready"
	I1217 19:57:13.775779  577087 pod_ready.go:86] duration metric: took 4.269421ms for pod "kube-apiserver-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:13.777919  577087 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.155717  577087 pod_ready.go:94] pod "kube-controller-manager-pause-318455" is "Ready"
	I1217 19:57:14.155753  577087 pod_ready.go:86] duration metric: took 377.806813ms for pod "kube-controller-manager-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.355787  577087 pod_ready.go:83] waiting for pod "kube-proxy-48bqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.755146  577087 pod_ready.go:94] pod "kube-proxy-48bqr" is "Ready"
	I1217 19:57:14.755190  577087 pod_ready.go:86] duration metric: took 399.368431ms for pod "kube-proxy-48bqr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:14.955826  577087 pod_ready.go:83] waiting for pod "kube-scheduler-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:15.354857  577087 pod_ready.go:94] pod "kube-scheduler-pause-318455" is "Ready"
	I1217 19:57:15.354891  577087 pod_ready.go:86] duration metric: took 399.033505ms for pod "kube-scheduler-pause-318455" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:57:15.354904  577087 pod_ready.go:40] duration metric: took 1.604538612s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:57:15.418013  577087 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 19:57:15.421537  577087 out.go:179] * Done! kubectl is now configured to use "pause-318455" cluster and "default" namespace by default
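That closes out the pause-318455 restart: once the apiserver answers /healthz, minikube waits for each control-plane pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) to report the Ready condition before printing Done. A compact client-go sketch of that per-pod readiness check, built from a kubeconfig like the client config dumped above — this is the standard API that minikube's pod_ready.go wraps, not its actual code, and the kubeconfig path, pod name and timeout below are illustrative:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a kube-system pod until its Ready condition is True or
// the timeout expires, roughly what the pod_ready.go lines above log.
func waitPodReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("pod %q not Ready within %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-scheduler-pause-318455", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}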
	I1217 19:57:13.837115  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:57:13.859258  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 19:57:13.879116  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:57:13.899336  576365 provision.go:87] duration metric: took 386.601257ms to configureAuth
	I1217 19:57:13.899364  576365 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:57:13.899553  576365 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:57:13.899686  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:13.920676  576365 main.go:143] libmachine: Using SSH client type: native
	I1217 19:57:13.921007  576365 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33383 <nil> <nil>}
	I1217 19:57:13.921026  576365 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:57:14.244267  576365 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:57:14.244288  576365 machine.go:97] duration metric: took 4.30398135s to provisionDockerMachine
	I1217 19:57:14.244300  576365 client.go:176] duration metric: took 10.199999275s to LocalClient.Create
	I1217 19:57:14.244324  576365 start.go:167] duration metric: took 10.20005099s to libmachine.API.Create "cert-expiration-059470"
	I1217 19:57:14.244332  576365 start.go:293] postStartSetup for "cert-expiration-059470" (driver="docker")
	I1217 19:57:14.244344  576365 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:57:14.244403  576365 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:57:14.244441  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.266406  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.375323  576365 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:57:14.380505  576365 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:57:14.380528  576365 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:57:14.380538  576365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:57:14.380597  576365 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:57:14.380682  576365 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 19:57:14.380804  576365 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:57:14.390165  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:57:14.415803  576365 start.go:296] duration metric: took 171.455568ms for postStartSetup
	I1217 19:57:14.416263  576365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-059470
	I1217 19:57:14.437809  576365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/config.json ...
	I1217 19:57:14.438100  576365 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:57:14.438161  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.462435  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.567329  576365 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:57:14.572475  576365 start.go:128] duration metric: took 10.530771942s to createHost
	I1217 19:57:14.572493  576365 start.go:83] releasing machines lock for "cert-expiration-059470", held for 10.530917701s
	I1217 19:57:14.572556  576365 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" cert-expiration-059470
	I1217 19:57:14.592853  576365 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:57:14.592852  576365 ssh_runner.go:195] Run: cat /version.json
	I1217 19:57:14.592909  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.592937  576365 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-059470
	I1217 19:57:14.615547  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.617888  576365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33383 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/cert-expiration-059470/id_rsa Username:docker}
	I1217 19:57:14.720708  576365 ssh_runner.go:195] Run: systemctl --version
	I1217 19:57:14.784825  576365 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:57:14.826608  576365 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:57:14.831989  576365 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:57:14.832050  576365 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:57:14.867116  576365 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:57:14.867133  576365 start.go:496] detecting cgroup driver to use...
	I1217 19:57:14.867166  576365 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:57:14.867216  576365 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:57:14.889288  576365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:57:14.904332  576365 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:57:14.904390  576365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:57:14.924213  576365 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:57:14.945909  576365 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:57:15.055921  576365 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:57:15.153574  576365 docker.go:234] disabling docker service ...
	I1217 19:57:15.153635  576365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:57:15.175751  576365 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:57:15.190459  576365 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:57:15.280780  576365 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:57:15.401517  576365 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:57:15.424566  576365 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:57:15.449126  576365 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:57:15.449181  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.466072  576365 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:57:15.466193  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.486909  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.503896  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.522520  576365 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:57:15.539383  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.554054  576365 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.576663  576365 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:57:15.587513  576365 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:57:15.596959  576365 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:57:15.607708  576365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:15.723409  576365 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:57:17.276603  576365 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.55316649s)
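The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place so CRI-O uses registry.k8s.io/pause:3.10.1 and the systemd cgroup manager, after which crio is restarted. A small Go sketch of the same line-oriented rewrite, with the regexps lifted from the logged sed expressions (file path as logged; sudo handling and the conmon_cgroup/default_sysctls edits are omitted):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	// Same substitutions the logged sed commands perform.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, out, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("02-crio.conf updated; restart with: sudo systemctl restart crio")
}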
	I1217 19:57:17.276626  576365 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:57:17.276683  576365 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:57:17.280920  576365 start.go:564] Will wait 60s for crictl version
	I1217 19:57:17.280973  576365 ssh_runner.go:195] Run: which crictl
	I1217 19:57:17.285110  576365 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:57:17.315574  576365 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:57:17.315670  576365 ssh_runner.go:195] Run: crio --version
	I1217 19:57:17.346480  576365 ssh_runner.go:195] Run: crio --version
	I1217 19:57:17.380717  576365 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 19:57:17.382246  576365 cli_runner.go:164] Run: docker network inspect cert-expiration-059470 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:57:17.400061  576365 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 19:57:17.404471  576365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:57:17.414867  576365 kubeadm.go:884] updating cluster {Name:cert-expiration-059470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-059470 Namespace:default APIServerHAVIP: APIServerName:mini
kubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:57:17.414984  576365 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:57:17.415026  576365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:57:17.447257  576365 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:57:17.447274  576365 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:57:17.447338  576365 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:57:17.476150  576365 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:57:17.476164  576365 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:57:17.476170  576365 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1217 19:57:17.476273  576365 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=cert-expiration-059470 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-059470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:57:17.476362  576365 ssh_runner.go:195] Run: crio config
	I1217 19:57:17.530264  576365 cni.go:84] Creating CNI manager for ""
	I1217 19:57:17.530282  576365 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:57:17.530305  576365 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:57:17.530332  576365 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:cert-expiration-059470 NodeName:cert-expiration-059470 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:57:17.530493  576365 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "cert-expiration-059470"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:57:17.530554  576365 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 19:57:17.540856  576365 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:57:17.540937  576365 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:57:17.550164  576365 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 19:57:17.564919  576365 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:57:17.722795  576365 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
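At this point the rendered kubeadm configuration has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. For a fresh profile like cert-expiration-059470 it is ultimately consumed by kubeadm via its --config flag; the exact invocation is outside this excerpt, so the sketch below is only a plausible shape of that step (the binary path follows the /var/lib/minikube/binaries layout seen above, and the preflight flag is illustrative):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// kubeadm accepts the rendered file via --config; minikube keeps a pinned
	// binary under /var/lib/minikube/binaries/<version>/ on the node.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.34.3/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new",
		"--ignore-preflight-errors=all")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}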
	I1217 19:57:17.738163  576365 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:57:17.742134  576365 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:57:17.801097  576365 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:57:17.888902  576365 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:57:17.911905  576365 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470 for IP: 192.168.94.2
	I1217 19:57:17.911920  576365 certs.go:195] generating shared ca certs ...
	I1217 19:57:17.911939  576365 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:17.912167  576365 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:57:17.912219  576365 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:57:17.912226  576365 certs.go:257] generating profile certs ...
	I1217 19:57:17.912297  576365 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.key
	I1217 19:57:17.912311  576365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.crt with IP's: []
	I1217 19:57:18.003089  576365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.crt ...
	I1217 19:57:18.003109  576365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.crt: {Name:mk00a4411a297435489939010c4a81b3ffa2dffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:18.004188  576365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.key ...
	I1217 19:57:18.004204  576365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.key: {Name:mke86d5fb98502bc2d24281f40b9830d439efe09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:18.004351  576365 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.key.43d27db5
	I1217 19:57:18.004365  576365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.crt.43d27db5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 19:57:18.039427  576365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.crt.43d27db5 ...
	I1217 19:57:18.039444  576365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.crt.43d27db5: {Name:mk97895c73a08015950630f186ed775ca2b9d3f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:18.039620  576365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.key.43d27db5 ...
	I1217 19:57:18.039631  576365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.key.43d27db5: {Name:mkd130b276c2ce0f59c92e373d1f7b997fbe42f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:18.039736  576365 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.crt.43d27db5 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.crt
	I1217 19:57:18.039807  576365 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.key.43d27db5 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.key
	I1217 19:57:18.039854  576365 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.key
	I1217 19:57:18.039867  576365 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.crt with IP's: []
	I1217 19:57:18.072648  576365 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.crt ...
	I1217 19:57:18.072675  576365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.crt: {Name:mk18a51335de18e5575fc360482651b7bd482d79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:18.072927  576365 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.key ...
	I1217 19:57:18.072946  576365 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.key: {Name:mk2e167ef521d1f9d50e52d0a5fa439a6a91e579 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:57:18.073233  576365 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:57:18.073276  576365 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:57:18.073283  576365 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:57:18.073377  576365 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:57:18.073407  576365 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:57:18.073429  576365 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:57:18.073471  576365 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:57:18.074062  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:57:18.100260  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:57:18.124004  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:57:18.147108  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:57:18.170815  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 19:57:18.196594  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:57:18.219760  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:57:18.238750  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:57:18.263939  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:57:18.298236  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:57:18.318976  576365 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:57:18.338619  576365 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:57:18.359861  576365 ssh_runner.go:195] Run: openssl version
	I1217 19:57:18.369604  576365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:18.378989  576365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:57:18.388051  576365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:18.392446  576365 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:18.392497  576365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:57:18.436614  576365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:57:18.446177  576365 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 19:57:18.455848  576365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:57:18.464715  576365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:57:18.475314  576365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:57:18.479982  576365 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:57:18.480038  576365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:57:18.524414  576365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:57:18.534848  576365 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 19:57:18.543350  576365 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:57:18.553043  576365 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:57:18.562702  576365 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:57:18.567103  576365 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:57:18.567163  576365 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:57:18.618981  576365 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:57:18.628370  576365 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 19:57:18.636503  576365 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:57:18.640949  576365 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:57:18.641015  576365 kubeadm.go:401] StartCluster: {Name:cert-expiration-059470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:cert-expiration-059470 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:57:18.641120  576365 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:57:18.641170  576365 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:57:18.671480  576365 cri.go:89] found id: ""
	I1217 19:57:18.671550  576365 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:57:18.683184  576365 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:57:18.694853  576365 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:57:18.694913  576365 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:57:18.708614  576365 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:57:18.708631  576365 kubeadm.go:158] found existing configuration files:
	
	I1217 19:57:18.708684  576365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:57:18.718896  576365 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:57:18.718952  576365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:57:18.727865  576365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:57:18.737489  576365 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:57:18.737544  576365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:57:18.746593  576365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:57:18.758632  576365 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:57:18.758705  576365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:57:18.771575  576365 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:57:18.784869  576365 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:57:18.784935  576365 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:57:18.796680  576365 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:57:14.256001  580641 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 19:57:14.256320  580641 start.go:159] libmachine.API.Create for "force-systemd-flag-134068" (driver="docker")
	I1217 19:57:14.256362  580641 client.go:173] LocalClient.Create starting
	I1217 19:57:14.256441  580641 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:57:14.256485  580641 main.go:143] libmachine: Decoding PEM data...
	I1217 19:57:14.256514  580641 main.go:143] libmachine: Parsing certificate...
	I1217 19:57:14.256590  580641 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:57:14.256625  580641 main.go:143] libmachine: Decoding PEM data...
	I1217 19:57:14.256643  580641 main.go:143] libmachine: Parsing certificate...
	I1217 19:57:14.257033  580641 cli_runner.go:164] Run: docker network inspect force-systemd-flag-134068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:57:14.276899  580641 cli_runner.go:211] docker network inspect force-systemd-flag-134068 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:57:14.276974  580641 network_create.go:284] running [docker network inspect force-systemd-flag-134068] to gather additional debugging logs...
	I1217 19:57:14.277003  580641 cli_runner.go:164] Run: docker network inspect force-systemd-flag-134068
	W1217 19:57:14.297789  580641 cli_runner.go:211] docker network inspect force-systemd-flag-134068 returned with exit code 1
	I1217 19:57:14.297824  580641 network_create.go:287] error running [docker network inspect force-systemd-flag-134068]: docker network inspect force-systemd-flag-134068: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-134068 not found
	I1217 19:57:14.297840  580641 network_create.go:289] output of [docker network inspect force-systemd-flag-134068]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-134068 not found
	
	** /stderr **
	I1217 19:57:14.297959  580641 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:57:14.320422  580641 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 19:57:14.321302  580641 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 19:57:14.321932  580641 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 19:57:14.322796  580641 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-a7a7a6de88fd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:87:06:7a:45:a7} reservation:<nil>}
	I1217 19:57:14.323578  580641 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-8206734db8de IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:1e:84:59:81:4f} reservation:<nil>}
	I1217 19:57:14.324374  580641 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a8fdc05f236b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:59:80:d3:98:cc} reservation:<nil>}
	I1217 19:57:14.325201  580641 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001fd0550}
	I1217 19:57:14.325235  580641 network_create.go:124] attempt to create docker network force-systemd-flag-134068 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 19:57:14.325285  580641 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-134068 force-systemd-flag-134068
	I1217 19:57:14.381025  580641 network_create.go:108] docker network force-systemd-flag-134068 192.168.103.0/24 created
	I1217 19:57:14.381059  580641 kic.go:121] calculated static IP "192.168.103.2" for the "force-systemd-flag-134068" container
	I1217 19:57:14.381149  580641 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:57:14.404349  580641 cli_runner.go:164] Run: docker volume create force-systemd-flag-134068 --label name.minikube.sigs.k8s.io=force-systemd-flag-134068 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:57:14.426556  580641 oci.go:103] Successfully created a docker volume force-systemd-flag-134068
	I1217 19:57:14.426674  580641 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-134068-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-134068 --entrypoint /usr/bin/test -v force-systemd-flag-134068:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:57:14.871548  580641 oci.go:107] Successfully prepared a docker volume force-systemd-flag-134068
	I1217 19:57:14.871623  580641 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 19:57:14.871636  580641 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 19:57:14.871731  580641 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-134068:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 19:57:18.038275  580641 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-134068:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.166474641s)
	I1217 19:57:18.038308  580641 kic.go:203] duration metric: took 3.166669008s to extract preloaded images to volume ...
	W1217 19:57:18.038405  580641 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:57:18.038455  580641 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:57:18.038577  580641 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:57:18.107691  580641 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-134068 --name force-systemd-flag-134068 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-134068 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-134068 --network force-systemd-flag-134068 --ip 192.168.103.2 --volume force-systemd-flag-134068:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:57:18.426343  580641 cli_runner.go:164] Run: docker container inspect force-systemd-flag-134068 --format={{.State.Running}}
	I1217 19:57:18.449267  580641 cli_runner.go:164] Run: docker container inspect force-systemd-flag-134068 --format={{.State.Status}}
	I1217 19:57:18.471362  580641 cli_runner.go:164] Run: docker exec force-systemd-flag-134068 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:57:18.521616  580641 oci.go:144] the created container "force-systemd-flag-134068" has a running status.
	I1217 19:57:18.521654  580641 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/force-systemd-flag-134068/id_rsa...
	I1217 19:57:18.662040  580641 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/force-systemd-flag-134068/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 19:57:18.662113  580641 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/force-systemd-flag-134068/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:57:18.699542  580641 cli_runner.go:164] Run: docker container inspect force-systemd-flag-134068 --format={{.State.Status}}
	I1217 19:57:18.725303  580641 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:57:18.725328  580641 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-134068 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:57:18.783756  580641 cli_runner.go:164] Run: docker container inspect force-systemd-flag-134068 --format={{.State.Status}}
	I1217 19:57:18.809536  580641 machine.go:94] provisionDockerMachine start ...
	I1217 19:57:18.809647  580641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-134068
	I1217 19:57:18.835239  580641 main.go:143] libmachine: Using SSH client type: native
	I1217 19:57:18.835615  580641 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1217 19:57:18.835631  580641 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:57:18.996610  580641 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-134068
	
	I1217 19:57:18.996645  580641 ubuntu.go:182] provisioning hostname "force-systemd-flag-134068"
	I1217 19:57:18.996763  580641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-134068
	I1217 19:57:19.018413  580641 main.go:143] libmachine: Using SSH client type: native
	I1217 19:57:19.018830  580641 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33388 <nil> <nil>}
	I1217 19:57:19.018879  580641 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-134068 && echo "force-systemd-flag-134068" | sudo tee /etc/hostname
	
	
	==> CRI-O <==
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.101268637Z" level=info msg="RDT not available in the host system"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.101277621Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.102327104Z" level=info msg="Conmon does support the --sync option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.102351274Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.102368664Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.103207133Z" level=info msg="Conmon does support the --sync option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.103228079Z" level=info msg="Conmon does support the --log-global-size-max option"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.108232177Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.108266477Z" level=info msg="Updated default CNI network name to kindnet"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.108761085Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    namespaced_auth_dir = \"/etc/crio/auth\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \
"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nr
i]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.109207667Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.109261Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.19479336Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-l2sfj Namespace:kube-system ID:af67eebdf33e512a001c1d3a8d9a79dfc3086ae8b510b9d329e93b9eaa38aa29 UID:01975478-9e8c-4475-b2d0-82166c6a60a4 NetNS:/var/run/netns/ec3f0e86-4ded-4f50-97d5-40914ecc9f0a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009223e8}] Aliases:map[]}"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195006917Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-l2sfj for CNI network kindnet (type=ptp)"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.19557839Z" level=info msg="Registered SIGHUP reload watcher"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195605547Z" level=info msg="Starting seccomp notifier watcher"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195674557Z" level=info msg="Create NRI interface"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195810437Z" level=info msg="built-in NRI default validator is disabled"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195827211Z" level=info msg="runtime interface created"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195844027Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195851985Z" level=info msg="runtime interface starting up..."
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195859437Z" level=info msg="starting plugins..."
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.195896674Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Dec 17 19:57:12 pause-318455 crio[2243]: time="2025-12-17T19:57:12.196262287Z" level=info msg="No systemd watchdog enabled"
	Dec 17 19:57:12 pause-318455 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	8a890307848d3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                     16 seconds ago      Running             coredns                   0                   af67eebdf33e5       coredns-66bc5c9577-l2sfj               kube-system
	dece30b73bfce       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   27 seconds ago      Running             kindnet-cni               0                   a7145c38c7be2       kindnet-z5f74                          kube-system
	19248a249a354       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                     30 seconds ago      Running             kube-proxy                0                   7df30a119177c       kube-proxy-48bqr                       kube-system
	ede91caa7f2fc       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                     42 seconds ago      Running             kube-scheduler            0                   f4bc5ececc182       kube-scheduler-pause-318455            kube-system
	2cee54e9215fa       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                     42 seconds ago      Running             kube-apiserver            0                   4fd64a24da55e       kube-apiserver-pause-318455            kube-system
	12f0a8e54bc78       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                     42 seconds ago      Running             kube-controller-manager   0                   4d9752cdd0281       kube-controller-manager-pause-318455   kube-system
	76b691e5433f6       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                     42 seconds ago      Running             etcd                      0                   c6dbc67527147       etcd-pause-318455                      kube-system
	
	
	==> coredns [8a890307848d3863ac5dda4d27388c617ecb303c809f2a7fc9317b22fb60fda7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47913 - 31426 "HINFO IN 4982629858459019526.2784063241529817807. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.022807116s
	
	
	==> describe nodes <==
	Name:               pause-318455
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-318455
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=pause-318455
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_56_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:56:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-318455
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:57:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:56:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:56:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:56:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:57:14 +0000   Wed, 17 Dec 2025 19:57:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-318455
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                cf2bbaf9-e321-41e3-b873-6f662bae94bb
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-l2sfj                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     32s
	  kube-system                 etcd-pause-318455                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-z5f74                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-pause-318455             250m (3%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-pause-318455    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-48bqr                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-pause-318455             100m (1%)     0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node pause-318455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node pause-318455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node pause-318455 status is now: NodeHasSufficientPID
	  Normal  Starting                 38s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  38s                kubelet          Node pause-318455 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s                kubelet          Node pause-318455 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s                kubelet          Node pause-318455 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s                node-controller  Node pause-318455 event: Registered Node pause-318455 in Controller
	  Normal  NodeReady                17s                kubelet          Node pause-318455 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [76b691e5433f67fe8b6ba2acd73106fa663b879e6b9059c7bba6777dd6049659] <==
	{"level":"info","ts":"2025-12-17T19:56:50.161032Z","caller":"traceutil/trace.go:172","msg":"trace[430430009] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"320.311044ms","start":"2025-12-17T19:56:49.840706Z","end":"2025-12-17T19:56:50.161017Z","steps":["trace[430430009] 'process raft request'  (duration: 320.100512ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:56:50.161067Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:49.842244Z","time spent":"318.771899ms","remote":"127.0.0.1:59842","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4086,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" mod_revision:373 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" value_size:4026 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-66bc5c9577\" > >"}
	{"level":"warn","ts":"2025-12-17T19:56:50.161067Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:49.838348Z","time spent":"322.668138ms","remote":"127.0.0.1:59828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2863,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:371 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2812 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"warn","ts":"2025-12-17T19:56:50.161135Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:49.840673Z","time spent":"320.389902ms","remote":"127.0.0.1:59828","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4704,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kindnet\" mod_revision:370 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kindnet\" value_size:4656 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kindnet\" > >"}
	{"level":"warn","ts":"2025-12-17T19:56:50.488248Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.598822ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597791944127917 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:339 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T19:56:50.488333Z","caller":"traceutil/trace.go:172","msg":"trace[431016538] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"319.944753ms","start":"2025-12-17T19:56:50.168374Z","end":"2025-12-17T19:56:50.488319Z","steps":["trace[431016538] 'process raft request'  (duration: 86.207609ms)","trace[431016538] 'compare'  (duration: 233.241961ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:56:50.488396Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:50.168357Z","time spent":"320.003283ms","remote":"127.0.0.1:59792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4323,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:339 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4274 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-12-17T19:56:50.497262Z","caller":"traceutil/trace.go:172","msg":"trace[2084870308] linearizableReadLoop","detail":"{readStateIndex:396; appliedIndex:396; }","duration":"122.216703ms","start":"2025-12-17T19:56:50.375024Z","end":"2025-12-17T19:56:50.497241Z","steps":["trace[2084870308] 'read index received'  (duration: 122.208275ms)","trace[2084870308] 'applied index is now lower than readState.Index'  (duration: 7.393µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:56:50.497444Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.403737ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/kindnet\" limit:1 ","response":"range_response_count:1 size:520"}
	{"level":"info","ts":"2025-12-17T19:56:50.497467Z","caller":"traceutil/trace.go:172","msg":"trace[487005176] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/kindnet; range_end:; response_count:1; response_revision:385; }","duration":"122.447566ms","start":"2025-12-17T19:56:50.375013Z","end":"2025-12-17T19:56:50.497460Z","steps":["trace[487005176] 'agreement among raft nodes before linearized reading'  (duration: 122.306485ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:50.497447Z","caller":"traceutil/trace.go:172","msg":"trace[1186271414] transaction","detail":"{read_only:false; response_revision:386; number_of_response:1; }","duration":"321.169939ms","start":"2025-12-17T19:56:50.176257Z","end":"2025-12-17T19:56:50.497427Z","steps":["trace[1186271414] 'process raft request'  (duration: 321.038482ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:56:50.497556Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:50.176236Z","time spent":"321.257311ms","remote":"127.0.0.1:59298","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5325,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-z5f74\" mod_revision:380 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-z5f74\" value_size:5277 >> failure:<request_range:<key:\"/registry/pods/kube-system/kindnet-z5f74\" > >"}
	{"level":"info","ts":"2025-12-17T19:56:50.519550Z","caller":"traceutil/trace.go:172","msg":"trace[28871579] transaction","detail":"{read_only:false; response_revision:387; number_of_response:1; }","duration":"225.902922ms","start":"2025-12-17T19:56:50.293629Z","end":"2025-12-17T19:56:50.519532Z","steps":["trace[28871579] 'process raft request'  (duration: 225.802694ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:50.795054Z","caller":"traceutil/trace.go:172","msg":"trace[1099269762] transaction","detail":"{read_only:false; response_revision:388; number_of_response:1; }","duration":"256.014028ms","start":"2025-12-17T19:56:50.539019Z","end":"2025-12-17T19:56:50.795033Z","steps":["trace[1099269762] 'process raft request'  (duration: 211.636548ms)","trace[1099269762] 'compare'  (duration: 44.08424ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T19:56:50.795473Z","caller":"traceutil/trace.go:172","msg":"trace[781548046] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"161.911211ms","start":"2025-12-17T19:56:50.633549Z","end":"2025-12-17T19:56:50.795460Z","steps":["trace[781548046] 'process raft request'  (duration: 161.853445ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.111584Z","caller":"traceutil/trace.go:172","msg":"trace[285237607] transaction","detail":"{read_only:false; response_revision:395; number_of_response:1; }","duration":"244.95447ms","start":"2025-12-17T19:56:50.866613Z","end":"2025-12-17T19:56:51.111568Z","steps":["trace[285237607] 'process raft request'  (duration: 244.881218ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.111810Z","caller":"traceutil/trace.go:172","msg":"trace[1348301164] transaction","detail":"{read_only:false; response_revision:394; number_of_response:1; }","duration":"273.737769ms","start":"2025-12-17T19:56:50.838054Z","end":"2025-12-17T19:56:51.111791Z","steps":["trace[1348301164] 'process raft request'  (duration: 272.590905ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.271073Z","caller":"traceutil/trace.go:172","msg":"trace[182069330] linearizableReadLoop","detail":"{readStateIndex:406; appliedIndex:406; }","duration":"160.622163ms","start":"2025-12-17T19:56:51.110429Z","end":"2025-12-17T19:56:51.271051Z","steps":["trace[182069330] 'read index received'  (duration: 160.613742ms)","trace[182069330] 'applied index is now lower than readState.Index'  (duration: 6.98µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:56:51.277929Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"247.246211ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:4400"}
	{"level":"info","ts":"2025-12-17T19:56:51.277982Z","caller":"traceutil/trace.go:172","msg":"trace[1236643297] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"162.389749ms","start":"2025-12-17T19:56:51.115579Z","end":"2025-12-17T19:56:51.277969Z","steps":["trace[1236643297] 'process raft request'  (duration: 162.354958ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.277980Z","caller":"traceutil/trace.go:172","msg":"trace[1605287942] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"391.463699ms","start":"2025-12-17T19:56:50.886494Z","end":"2025-12-17T19:56:51.277958Z","steps":["trace[1605287942] 'process raft request'  (duration: 384.65119ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:56:51.277994Z","caller":"traceutil/trace.go:172","msg":"trace[347070194] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:395; }","duration":"247.330995ms","start":"2025-12-17T19:56:51.030655Z","end":"2025-12-17T19:56:51.277986Z","steps":["trace[347070194] 'agreement among raft nodes before linearized reading'  (duration: 240.463836ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:56:51.278119Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T19:56:50.886468Z","time spent":"391.567455ms","remote":"127.0.0.1:59792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4385,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:393 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4336 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-12-17T19:56:51.278138Z","caller":"traceutil/trace.go:172","msg":"trace[853644693] transaction","detail":"{read_only:false; number_of_response:1; response_revision:397; }","duration":"164.64451ms","start":"2025-12-17T19:56:51.113487Z","end":"2025-12-17T19:56:51.278131Z","steps":["trace[853644693] 'process raft request'  (duration: 164.402931ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T19:57:08.019169Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"220.571668ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597791944128097 > lease_revoke:<id:06ed9b2de2d3b5db>","response":"size:28"}
	
	
	==> kernel <==
	 19:57:21 up  1:39,  0 user,  load average: 6.44, 2.61, 1.86
	Linux pause-318455 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dece30b73bfce1ce557b5bbe5dbaf9154f600e34ba66b7ca4ca88e585241097c] <==
	I1217 19:56:53.479885       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 19:56:53.480294       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 19:56:53.480440       1 main.go:148] setting mtu 1500 for CNI 
	I1217 19:56:53.480461       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 19:56:53.480480       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T19:56:53Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 19:56:53.684459       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 19:56:53.684495       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 19:56:53.684550       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 19:56:53.684775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 19:56:54.061840       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 19:56:54.061875       1 metrics.go:72] Registering metrics
	I1217 19:56:54.061946       1 controller.go:711] "Syncing nftables rules"
	I1217 19:57:03.692213       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 19:57:03.692290       1 main.go:301] handling current node
	I1217 19:57:13.691194       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 19:57:13.691240       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2cee54e9215fa59351da49c19c47358d8bfa5c9c824fa627c1b9f685d24495b7] <==
	I1217 19:56:41.184898       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 19:56:41.184955       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1217 19:56:41.191508       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 19:56:41.210300       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 19:56:41.210515       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:56:41.217500       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:56:41.217921       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 19:56:41.353763       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 19:56:41.988515       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 19:56:41.994211       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 19:56:41.994237       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 19:56:42.473270       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 19:56:42.506950       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 19:56:42.590661       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 19:56:42.597508       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 19:56:42.598509       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 19:56:42.602296       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 19:56:43.097352       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 19:56:43.477915       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 19:56:43.490835       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 19:56:43.498949       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 19:56:48.799626       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 19:56:49.251552       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:56:49.257688       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 19:56:49.365607       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [12f0a8e54bc78853c3f054005a5648e352dda07cdc1713c286582320329e7057] <==
	I1217 19:56:48.103969       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 19:56:48.104774       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 19:56:48.110506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:56:48.113328       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-318455" podCIDRs=["10.244.0.0/24"]
	I1217 19:56:48.114655       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 19:56:48.115786       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 19:56:48.124174       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 19:56:48.145329       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 19:56:48.146512       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 19:56:48.146560       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 19:56:48.146615       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 19:56:48.146625       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 19:56:48.146665       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 19:56:48.146684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 19:56:48.148226       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1217 19:56:48.148339       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 19:56:48.148361       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 19:56:48.149592       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 19:56:48.149613       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 19:56:48.150787       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 19:56:48.152371       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 19:56:48.155250       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 19:56:48.155262       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 19:56:48.158594       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 19:57:08.099434       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [19248a249a354c5c3da43d5ddc3ff65f75c61b9f2cab9913aab8d6492000822f] <==
	I1217 19:56:51.319152       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:56:51.382519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 19:56:51.483330       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 19:56:51.483387       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 19:56:51.483502       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:56:51.508825       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 19:56:51.508895       1 server_linux.go:132] "Using iptables Proxier"
	I1217 19:56:51.515489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:56:51.515962       1 server.go:527] "Version info" version="v1.34.3"
	I1217 19:56:51.516390       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:56:51.519049       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:56:51.519336       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:56:51.519154       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:56:51.519370       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:56:51.519204       1 config.go:200] "Starting service config controller"
	I1217 19:56:51.519384       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:56:51.519228       1 config.go:309] "Starting node config controller"
	I1217 19:56:51.519396       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:56:51.620329       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:56:51.620367       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 19:56:51.620368       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:56:51.620378       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ede91caa7f2fcc03537da65481e4d60d4a910e278cfbc996cd09ccdce85e42af] <==
	E1217 19:56:41.170811       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 19:56:41.172214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 19:56:41.172374       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 19:56:41.175514       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 19:56:41.175699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 19:56:41.175784       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 19:56:41.176004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 19:56:41.176011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:56:41.176114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 19:56:41.176203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:56:41.176233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 19:56:41.176286       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 19:56:41.176301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:56:41.176354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 19:56:41.176363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 19:56:41.176482       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 19:56:41.177412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:56:41.177861       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 19:56:42.000654       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 19:56:42.023953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 19:56:42.112328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 19:56:42.234511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 19:56:42.280027       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 19:56:42.287228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1217 19:56:42.765170       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 19:56:44 pause-318455 kubelet[1317]: E1217 19:56:44.372231    1317 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"etcd-pause-318455\" already exists" pod="kube-system/etcd-pause-318455"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.455376    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-318455" podStartSLOduration=1.455351497 podStartE2EDuration="1.455351497s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.433324652 +0000 UTC m=+1.185729850" watchObservedRunningTime="2025-12-17 19:56:44.455351497 +0000 UTC m=+1.207756691"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.467067    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-318455" podStartSLOduration=1.467042516 podStartE2EDuration="1.467042516s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.466938778 +0000 UTC m=+1.219344006" watchObservedRunningTime="2025-12-17 19:56:44.467042516 +0000 UTC m=+1.219447706"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.467306    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-318455" podStartSLOduration=1.467297263 podStartE2EDuration="1.467297263s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.455739324 +0000 UTC m=+1.208144541" watchObservedRunningTime="2025-12-17 19:56:44.467297263 +0000 UTC m=+1.219702460"
	Dec 17 19:56:44 pause-318455 kubelet[1317]: I1217 19:56:44.493869    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-318455" podStartSLOduration=1.493842113 podStartE2EDuration="1.493842113s" podCreationTimestamp="2025-12-17 19:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:44.480271519 +0000 UTC m=+1.232676734" watchObservedRunningTime="2025-12-17 19:56:44.493842113 +0000 UTC m=+1.246247308"
	Dec 17 19:56:48 pause-318455 kubelet[1317]: I1217 19:56:48.134336    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 19:56:48 pause-318455 kubelet[1317]: I1217 19:56:48.135180    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768362    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/684ab215-b5a6-44fe-a4f6-fae57853d3c4-xtables-lock\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768434    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/684ab215-b5a6-44fe-a4f6-fae57853d3c4-kube-proxy\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768462    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/684ab215-b5a6-44fe-a4f6-fae57853d3c4-lib-modules\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:49 pause-318455 kubelet[1317]: I1217 19:56:49.768488    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkfz8\" (UniqueName: \"kubernetes.io/projected/684ab215-b5a6-44fe-a4f6-fae57853d3c4-kube-api-access-nkfz8\") pod \"kube-proxy-48bqr\" (UID: \"684ab215-b5a6-44fe-a4f6-fae57853d3c4\") " pod="kube-system/kube-proxy-48bqr"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272588    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-cni-cfg\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272652    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-xtables-lock\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272683    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnkfl\" (UniqueName: \"kubernetes.io/projected/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-kube-api-access-fnkfl\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:50 pause-318455 kubelet[1317]: I1217 19:56:50.272704    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916-lib-modules\") pod \"kindnet-z5f74\" (UID: \"2db52b2f-fdbb-4ede-a88c-ca7bf3d7e916\") " pod="kube-system/kindnet-z5f74"
	Dec 17 19:56:51 pause-318455 kubelet[1317]: I1217 19:56:51.409103    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-48bqr" podStartSLOduration=2.40906253 podStartE2EDuration="2.40906253s" podCreationTimestamp="2025-12-17 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:56:51.408682999 +0000 UTC m=+8.161088206" watchObservedRunningTime="2025-12-17 19:56:51.40906253 +0000 UTC m=+8.161467728"
	Dec 17 19:56:53 pause-318455 kubelet[1317]: I1217 19:56:53.400321    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-z5f74" podStartSLOduration=2.071066059 podStartE2EDuration="4.400299746s" podCreationTimestamp="2025-12-17 19:56:49 +0000 UTC" firstStartedPulling="2025-12-17 19:56:50.864320375 +0000 UTC m=+7.616725554" lastFinishedPulling="2025-12-17 19:56:53.193554067 +0000 UTC m=+9.945959241" observedRunningTime="2025-12-17 19:56:53.400157635 +0000 UTC m=+10.152562852" watchObservedRunningTime="2025-12-17 19:56:53.400299746 +0000 UTC m=+10.152704952"
	Dec 17 19:57:04 pause-318455 kubelet[1317]: I1217 19:57:04.246427    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 19:57:04 pause-318455 kubelet[1317]: I1217 19:57:04.372609    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fgfq\" (UniqueName: \"kubernetes.io/projected/01975478-9e8c-4475-b2d0-82166c6a60a4-kube-api-access-4fgfq\") pod \"coredns-66bc5c9577-l2sfj\" (UID: \"01975478-9e8c-4475-b2d0-82166c6a60a4\") " pod="kube-system/coredns-66bc5c9577-l2sfj"
	Dec 17 19:57:04 pause-318455 kubelet[1317]: I1217 19:57:04.372684    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/01975478-9e8c-4475-b2d0-82166c6a60a4-config-volume\") pod \"coredns-66bc5c9577-l2sfj\" (UID: \"01975478-9e8c-4475-b2d0-82166c6a60a4\") " pod="kube-system/coredns-66bc5c9577-l2sfj"
	Dec 17 19:57:05 pause-318455 kubelet[1317]: I1217 19:57:05.440816    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-l2sfj" podStartSLOduration=16.440780819 podStartE2EDuration="16.440780819s" podCreationTimestamp="2025-12-17 19:56:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:57:05.440408435 +0000 UTC m=+22.192813647" watchObservedRunningTime="2025-12-17 19:57:05.440780819 +0000 UTC m=+22.193186015"
	Dec 17 19:57:15 pause-318455 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 19:57:15 pause-318455 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 19:57:15 pause-318455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 19:57:15 pause-318455 systemd[1]: kubelet.service: Consumed 1.527s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-318455 -n pause-318455
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-318455 -n pause-318455: exit status 2 (368.106102ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context pause-318455 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.43s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (261.485104ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:00:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-832842 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-832842 describe deploy/metrics-server -n kube-system: exit status 1 (60.198928ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-832842 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-832842
helpers_test.go:244: (dbg) docker inspect no-preload-832842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4",
	        "Created": "2025-12-17T19:59:10.833809324Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 613708,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T19:59:10.869024077Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/hosts",
	        "LogPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4-json.log",
	        "Name": "/no-preload-832842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-832842:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-832842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4",
	                "LowerDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-832842",
	                "Source": "/var/lib/docker/volumes/no-preload-832842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-832842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-832842",
	                "name.minikube.sigs.k8s.io": "no-preload-832842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "f39bd4a90e6d09f64c56d64ce5c8dc7e6bae0ff6aa34236a393c5ee2dbdcc02c",
	            "SandboxKey": "/var/run/docker/netns/f39bd4a90e6d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-832842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a19db78cafed3da0943e15828af72c0aafbad853d47090363f5479ad475afe12",
	                    "EndpointID": "5b183239c007ea792fc4633ed270c9912096ef72dc8f0e04828c390ff9f88ee4",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "92:89:e7:a8:b5:c7",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-832842",
	                        "dc205de21d84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-832842 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-832842 logs -n 25: (1.112048164s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-601560 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo containerd config dump                                                                                                                                                                                                  │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo crio config                                                                                                                                                                                                             │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ delete  │ -p cilium-601560                                                                                                                                                                                                                              │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p cert-options-997440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ stop    │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p NoKubernetes-327438 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ cert-options-997440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p cert-options-997440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p cert-options-997440                                                                                                                                                                                                                        │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p NoKubernetes-327438 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                               │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:59:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:59:07.163690  613002 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:59:07.163841  613002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:59:07.163849  613002 out.go:374] Setting ErrFile to fd 2...
	I1217 19:59:07.163855  613002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:59:07.164194  613002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:59:07.164836  613002 out.go:368] Setting JSON to false
	I1217 19:59:07.166288  613002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6098,"bootTime":1765995449,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:59:07.166372  613002 start.go:143] virtualization: kvm guest
	I1217 19:59:07.171555  613002 out.go:179] * [no-preload-832842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:59:07.173234  613002 notify.go:221] Checking for updates...
	I1217 19:59:07.173302  613002 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:59:07.174663  613002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:59:07.179613  613002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:59:07.181140  613002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:59:07.182442  613002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:59:07.183702  613002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:59:02.464936  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:02.465384  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:02.965029  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:02.965493  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:03.465143  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:03.465610  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:03.965216  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:03.965667  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:04.464948  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:04.465470  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:04.964954  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:04.965420  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:05.465113  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:05.465545  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:05.965163  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:05.965549  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:06.465209  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:06.465648  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:06.965176  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:06.965625  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
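The interleaved 596882 entries above come from another concurrently running minikube process whose apiserver is restarting; it retries the /healthz endpoint roughly twice a second until it answers. A rough manual equivalent of that probe (illustrative only; the address is taken from the log above):

    # hand-run version of the same health probe (illustrative)
    curl -k https://192.168.76.2:8443/healthz
    # prints "ok" once the apiserver is serving; "connection refused" while it is still starting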
	I1217 19:59:07.189252  613002 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:59:07.189411  613002 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:59:07.189554  613002 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 19:59:07.189708  613002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:59:07.217800  613002 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:59:07.217948  613002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:59:07.283633  613002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 19:59:07.2713645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:59:07.283787  613002 docker.go:319] overlay module found
	I1217 19:59:07.285672  613002 out.go:179] * Using the docker driver based on user configuration
	I1217 19:59:07.286708  613002 start.go:309] selected driver: docker
	I1217 19:59:07.286724  613002 start.go:927] validating driver "docker" against <nil>
	I1217 19:59:07.286738  613002 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:59:07.287471  613002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:59:07.346850  613002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 19:59:07.336017187 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:59:07.347017  613002 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:59:07.347269  613002 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:59:07.348945  613002 out.go:179] * Using Docker driver with root privileges
	I1217 19:59:07.350157  613002 cni.go:84] Creating CNI manager for ""
	I1217 19:59:07.350254  613002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:07.350270  613002 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 19:59:07.350385  613002 start.go:353] cluster config:
	{Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
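The block above is the full generated cluster config for no-preload-832842. It is persisted as JSON in the profile directory (the exact path is logged a few lines below), so it can be inspected after the run, e.g. (illustrative):

    cat /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json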
	I1217 19:59:07.351786  613002 out.go:179] * Starting "no-preload-832842" primary control-plane node in "no-preload-832842" cluster
	I1217 19:59:07.352940  613002 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 19:59:07.354124  613002 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 19:59:07.355110  613002 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:59:07.355201  613002 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 19:59:07.355218  613002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json ...
	I1217 19:59:07.355251  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json: {Name:mke41a27585b2fe600b2f3d48e81fa7a9c8fa347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:07.355399  613002 cache.go:107] acquiring lock: {Name:mkcbe01b68b1228540a4060035e71f760b6eb215 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355397  613002 cache.go:107] acquiring lock: {Name:mkf47f2e6c696152682e65be33119c2f43b3bb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355439  613002 cache.go:107] acquiring lock: {Name:mk771abb5794f06a8d4c1ae0daf61ddb16c9a0d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355460  613002 cache.go:107] acquiring lock: {Name:mk74d4b3a0b59766e169c7e12524465d5725aec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355486  613002 cache.go:115] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 19:59:07.355497  613002 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.388µs
	I1217 19:59:07.355515  613002 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 19:59:07.355491  613002 cache.go:107] acquiring lock: {Name:mk098c0851fafa2f04384b394b02f76db8624c86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355531  613002 cache.go:107] acquiring lock: {Name:mk151219bf56732e207466095277e35e24e25e44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355537  613002 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:07.355551  613002 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:07.355571  613002 cache.go:107] acquiring lock: {Name:mk3531dda110c99b8d236ae9f26b1d573c3696cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355635  613002 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:07.355672  613002 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:07.355684  613002 cache.go:115] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 19:59:07.355665  613002 cache.go:107] acquiring lock: {Name:mk1bb362c47f07be5bf19f353c27e03a385bbbad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355695  613002 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 171.567µs
	I1217 19:59:07.355713  613002 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 19:59:07.355847  613002 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:07.356023  613002 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:07.356947  613002 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:07.356959  613002 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:07.356959  613002 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:07.356966  613002 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:07.356959  613002 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:07.356947  613002 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:07.379591  613002 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 19:59:07.379613  613002 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 19:59:07.379629  613002 cache.go:243] Successfully downloaded all kic artifacts
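Because the kicbase image already exists in the local Docker daemon, the pull is skipped. Outside the test this can be confirmed with (illustrative):

    docker images --digests gcr.io/k8s-minikube/kicbase-builds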
	I1217 19:59:07.379664  613002 start.go:360] acquireMachinesLock for no-preload-832842: {Name:mka72685b85221388ed3605f67ec1d1d5d2a5266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.379771  613002 start.go:364] duration metric: took 83.12µs to acquireMachinesLock for "no-preload-832842"
	I1217 19:59:07.379803  613002 start.go:93] Provisioning new machine with config: &{Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:07.379894  613002 start.go:125] createHost starting for "" (driver="docker")
	I1217 19:59:05.082608  612025 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 19:59:05.082874  612025 start.go:159] libmachine.API.Create for "old-k8s-version-894575" (driver="docker")
	I1217 19:59:05.082908  612025 client.go:173] LocalClient.Create starting
	I1217 19:59:05.082981  612025 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:59:05.083017  612025 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:05.083035  612025 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:05.083123  612025 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:59:05.083148  612025 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:05.083178  612025 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:05.083543  612025 cli_runner.go:164] Run: docker network inspect old-k8s-version-894575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:59:05.102177  612025 cli_runner.go:211] docker network inspect old-k8s-version-894575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:59:05.102261  612025 network_create.go:284] running [docker network inspect old-k8s-version-894575] to gather additional debugging logs...
	I1217 19:59:05.102288  612025 cli_runner.go:164] Run: docker network inspect old-k8s-version-894575
	W1217 19:59:05.120764  612025 cli_runner.go:211] docker network inspect old-k8s-version-894575 returned with exit code 1
	I1217 19:59:05.120819  612025 network_create.go:287] error running [docker network inspect old-k8s-version-894575]: docker network inspect old-k8s-version-894575: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-894575 not found
	I1217 19:59:05.120840  612025 network_create.go:289] output of [docker network inspect old-k8s-version-894575]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-894575 not found
	
	** /stderr **
	I1217 19:59:05.121048  612025 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:05.140594  612025 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 19:59:05.141466  612025 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 19:59:05.141969  612025 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 19:59:05.142694  612025 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 19:59:05.143859  612025 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea19e0}
	I1217 19:59:05.143893  612025 network_create.go:124] attempt to create docker network old-k8s-version-894575 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 19:59:05.143953  612025 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-894575 old-k8s-version-894575
	I1217 19:59:05.198298  612025 network_create.go:108] docker network old-k8s-version-894575 192.168.85.0/24 created
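Subnets 192.168.49.0/24 through 192.168.76.0/24 are already held by other profile networks, so the first free /24 (192.168.85.0/24) is picked for old-k8s-version-894575. The resulting bridge can be checked with standard Docker commands (illustrative):

    docker network ls --filter driver=bridge
    docker network inspect old-k8s-version-894575 \
      --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'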
	I1217 19:59:05.198337  612025 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-894575" container
	I1217 19:59:05.198452  612025 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:59:05.216965  612025 cli_runner.go:164] Run: docker volume create old-k8s-version-894575 --label name.minikube.sigs.k8s.io=old-k8s-version-894575 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:59:05.236367  612025 oci.go:103] Successfully created a docker volume old-k8s-version-894575
	I1217 19:59:05.236468  612025 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-894575-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-894575 --entrypoint /usr/bin/test -v old-k8s-version-894575:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:59:05.657528  612025 oci.go:107] Successfully prepared a docker volume old-k8s-version-894575
	I1217 19:59:05.657610  612025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:59:05.657626  612025 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 19:59:05.657688  612025 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-894575:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
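This sidecar run extracts the lz4-compressed preload tarball straight into the old-k8s-version-894575 volume that will later be mounted at /var inside the node container. One way to peek at what landed in the volume (illustrative; uses a throwaway busybox container):

    docker run --rm -v old-k8s-version-894575:/var busybox ls /var/lib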
	I1217 19:59:07.382335  613002 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 19:59:07.382587  613002 start.go:159] libmachine.API.Create for "no-preload-832842" (driver="docker")
	I1217 19:59:07.382621  613002 client.go:173] LocalClient.Create starting
	I1217 19:59:07.382683  613002 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:59:07.382719  613002 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:07.382742  613002 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:07.382824  613002 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:59:07.382863  613002 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:07.382878  613002 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:07.383332  613002 cli_runner.go:164] Run: docker network inspect no-preload-832842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:59:07.403140  613002 cli_runner.go:211] docker network inspect no-preload-832842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:59:07.403234  613002 network_create.go:284] running [docker network inspect no-preload-832842] to gather additional debugging logs...
	I1217 19:59:07.403256  613002 cli_runner.go:164] Run: docker network inspect no-preload-832842
	W1217 19:59:07.421450  613002 cli_runner.go:211] docker network inspect no-preload-832842 returned with exit code 1
	I1217 19:59:07.421491  613002 network_create.go:287] error running [docker network inspect no-preload-832842]: docker network inspect no-preload-832842: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-832842 not found
	I1217 19:59:07.421503  613002 network_create.go:289] output of [docker network inspect no-preload-832842]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-832842 not found
	
	** /stderr **
	I1217 19:59:07.421583  613002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:07.442988  613002 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 19:59:07.443845  613002 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 19:59:07.444368  613002 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 19:59:07.444869  613002 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 19:59:07.445506  613002 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 19:59:07.445916  613002 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a8fdc05f236b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:59:80:d3:98:cc} reservation:<nil>}
	I1217 19:59:07.446618  613002 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001789860}
	I1217 19:59:07.446639  613002 network_create.go:124] attempt to create docker network no-preload-832842 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 19:59:07.446685  613002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-832842 no-preload-832842
	I1217 19:59:07.497988  613002 network_create.go:108] docker network no-preload-832842 192.168.103.0/24 created
	I1217 19:59:07.498030  613002 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-832842" container
	I1217 19:59:07.498167  613002 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:59:07.504784  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 19:59:07.508291  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:07.513354  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:07.517929  613002 cli_runner.go:164] Run: docker volume create no-preload-832842 --label name.minikube.sigs.k8s.io=no-preload-832842 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:59:07.521995  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:07.522679  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 19:59:07.525138  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 19:59:07.537568  613002 oci.go:103] Successfully created a docker volume no-preload-832842
	I1217 19:59:07.537635  613002 cli_runner.go:164] Run: docker run --rm --name no-preload-832842-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-832842 --entrypoint /usr/bin/test -v no-preload-832842:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:59:07.886133  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 19:59:07.886160  613002 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 530.72735ms
	I1217 19:59:07.886173  613002 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 19:59:08.795301  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 19:59:08.795333  613002 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.439801855s
	I1217 19:59:08.795358  613002 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 19:59:08.856112  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 19:59:08.856151  613002 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.500676672s
	I1217 19:59:08.856172  613002 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 19:59:08.866708  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 19:59:08.866747  613002 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.511285042s
	I1217 19:59:08.866766  613002 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 19:59:08.882861  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 19:59:08.882894  613002 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.527516133s
	I1217 19:59:08.882911  613002 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 19:59:08.920683  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 19:59:08.920719  613002 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.5651038s
	I1217 19:59:08.920735  613002 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 19:59:08.920755  613002 cache.go:87] Successfully saved all images to host disk.
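With --preload=false each required image is pulled and saved individually as a tarball under the image cache rather than coming from a single preload archive; the files written above can be listed directly (illustrative):

    ls /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/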
	I1217 19:59:10.751842  613002 cli_runner.go:217] Completed: docker run --rm --name no-preload-832842-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-832842 --entrypoint /usr/bin/test -v no-preload-832842:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (3.214156278s)
	I1217 19:59:10.751877  613002 oci.go:107] Successfully prepared a docker volume no-preload-832842
	I1217 19:59:10.751928  613002 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1217 19:59:10.752005  613002 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:59:10.752042  613002 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:59:10.752129  613002 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:59:10.816044  613002 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-832842 --name no-preload-832842 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-832842 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-832842 --network no-preload-832842 --ip 192.168.103.2 --volume no-preload-832842:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:59:11.106025  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Running}}
	I1217 19:59:11.129937  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:11.157039  613002 cli_runner.go:164] Run: docker exec no-preload-832842 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:59:11.207580  613002 oci.go:144] the created container "no-preload-832842" has a running status.
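The node container publishes its SSH and API ports to random localhost ports (the --publish=127.0.0.1:: flags in the docker run above); the actual mappings can be read back with docker port (illustrative):

    docker port no-preload-832842 22
    docker port no-preload-832842 8443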
	I1217 19:59:11.207623  613002 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa...
	I1217 19:59:11.328962  613002 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:59:11.361092  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:11.384868  613002 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:59:11.384898  613002 kic_runner.go:114] Args: [docker exec --privileged no-preload-832842 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:59:11.432305  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:11.460384  613002 machine.go:94] provisionDockerMachine start ...
	I1217 19:59:11.460491  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:11.485214  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.485606  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:11.485626  613002 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:59:11.641481  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-832842
	
	I1217 19:59:11.641514  613002 ubuntu.go:182] provisioning hostname "no-preload-832842"
	I1217 19:59:11.641584  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:11.664626  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.666007  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:11.666046  613002 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-832842 && echo "no-preload-832842" | sudo tee /etc/hostname
	I1217 19:59:11.825329  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-832842
	
	I1217 19:59:11.825437  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:11.846118  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.846361  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:11.846388  613002 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-832842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-832842/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-832842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:59:11.991193  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: 
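The shell fragment above keeps /etc/hosts in sync with the node's hostname, mapping 127.0.1.1 to no-preload-832842. The entry can be verified from the host (illustrative):

    minikube ssh -p no-preload-832842 -- grep no-preload-832842 /etc/hosts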
	I1217 19:59:11.991226  613002 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 19:59:11.991283  613002 ubuntu.go:190] setting up certificates
	I1217 19:59:11.991300  613002 provision.go:84] configureAuth start
	I1217 19:59:11.991367  613002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-832842
	I1217 19:59:12.009813  613002 provision.go:143] copyHostCerts
	I1217 19:59:12.009872  613002 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 19:59:12.009881  613002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 19:59:12.009958  613002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 19:59:12.010050  613002 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 19:59:12.010058  613002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 19:59:12.010116  613002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 19:59:12.010189  613002 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 19:59:12.010198  613002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 19:59:12.010223  613002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 19:59:12.010278  613002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.no-preload-832842 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-832842]
	I1217 19:59:07.465376  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:12.197779  613002 provision.go:177] copyRemoteCerts
	I1217 19:59:12.197839  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:59:12.197880  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.216040  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:12.318758  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:59:12.338513  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 19:59:12.357237  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 19:59:12.375227  613002 provision.go:87] duration metric: took 383.904667ms to configureAuth
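configureAuth generates a server certificate whose SANs cover 127.0.0.1, the node IP 192.168.103.2, localhost, minikube and the node name, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. The SANs can be double-checked afterwards (illustrative, assuming openssl is available inside the node image):

    minikube ssh -p no-preload-832842 -- sudo openssl x509 -noout -text -in /etc/docker/server.pem \
      | grep -A1 'Subject Alternative Name'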
	I1217 19:59:12.375269  613002 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:59:12.375433  613002 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:59:12.375535  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.393214  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:12.393468  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:12.393494  613002 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:59:12.685460  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:59:12.685487  613002 machine.go:97] duration metric: took 1.225074671s to provisionDockerMachine
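The last provisioning step writes CRIO_MINIKUBE_OPTIONS (here only --insecure-registry 10.96.0.0/12, the service CIDR) to /etc/sysconfig/crio.minikube and restarts crio. Both can be checked from the host (illustrative):

    minikube ssh -p no-preload-832842 -- cat /etc/sysconfig/crio.minikube
    minikube ssh -p no-preload-832842 -- sudo systemctl is-active crio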
	I1217 19:59:12.685498  613002 client.go:176] duration metric: took 5.3028708s to LocalClient.Create
	I1217 19:59:12.685518  613002 start.go:167] duration metric: took 5.302932427s to libmachine.API.Create "no-preload-832842"
	I1217 19:59:12.685527  613002 start.go:293] postStartSetup for "no-preload-832842" (driver="docker")
	I1217 19:59:12.685548  613002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:59:12.685624  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:59:12.685665  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.704175  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:12.810348  613002 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:59:12.814412  613002 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:59:12.814439  613002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:59:12.814451  613002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:59:12.814515  613002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:59:12.814588  613002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 19:59:12.814682  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:59:12.823021  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:12.845396  613002 start.go:296] duration metric: took 159.846518ms for postStartSetup
	I1217 19:59:12.845766  613002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-832842
	I1217 19:59:12.863764  613002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json ...
	I1217 19:59:12.864113  613002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:59:12.864171  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.881736  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:12.982690  613002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:59:12.987360  613002 start.go:128] duration metric: took 5.607443467s to createHost
	I1217 19:59:12.987392  613002 start.go:83] releasing machines lock for "no-preload-832842", held for 5.607606655s
	I1217 19:59:12.987460  613002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-832842
	I1217 19:59:13.005664  613002 ssh_runner.go:195] Run: cat /version.json
	I1217 19:59:13.005717  613002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:59:13.005729  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:13.005775  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:13.025320  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:13.025439  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:13.177139  613002 ssh_runner.go:195] Run: systemctl --version
	I1217 19:59:13.183940  613002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:59:13.219250  613002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:59:13.223980  613002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:59:13.224060  613002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:59:13.250609  613002 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:59:13.250634  613002 start.go:496] detecting cgroup driver to use...
	I1217 19:59:13.250675  613002 detect.go:190] detected "systemd" cgroup driver on host os
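detect.go reports the systemd cgroup driver here. One common heuristic for that decision, shown as an assumption and not necessarily what minikube itself does, is to treat a unified cgroup v2 hierarchy as a systemd-managed host:

	package main

	import (
		"fmt"
		"os"
	)

	// Rough heuristic only: a unified cgroup v2 hierarchy usually means the
	// host is systemd-managed, so the systemd cgroup driver is the safe choice.
	func detectCgroupDriver() string {
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			return "systemd"
		}
		return "cgroupfs"
	}

	func main() {
		fmt.Println("detected cgroup driver:", detectCgroupDriver())
	}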
	I1217 19:59:13.250726  613002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:59:13.266745  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:59:13.279174  613002 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:59:13.279236  613002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:59:13.297069  613002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:59:13.315051  613002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:59:13.401680  613002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:59:13.491655  613002 docker.go:234] disabling docker service ...
	I1217 19:59:13.491716  613002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:59:13.512752  613002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:59:13.526662  613002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:59:13.608428  613002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:59:13.693855  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:59:13.707271  613002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:59:13.722243  613002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:59:13.722310  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.732970  613002 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:59:13.733041  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.742144  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.751182  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.759924  613002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:59:13.768012  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.777071  613002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.791130  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
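Assuming the stock 02-crio.conf layout, the sed edits above should leave the file containing lines equivalent to the following (a sketch of the intended end state, not captured from the test host):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]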
	I1217 19:59:13.800404  613002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:59:13.808001  613002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:59:13.815592  613002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:13.900685  613002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:59:14.046676  613002 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:59:14.046750  613002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:59:14.051043  613002 start.go:564] Will wait 60s for crictl version
	I1217 19:59:14.051116  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.054924  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:59:14.080861  613002 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:59:14.080956  613002 ssh_runner.go:195] Run: crio --version
	I1217 19:59:14.109659  613002 ssh_runner.go:195] Run: crio --version
	I1217 19:59:14.140655  613002 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 19:59:10.486440  612025 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-894575:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.828689754s)
	I1217 19:59:10.486482  612025 kic.go:203] duration metric: took 4.828851778s to extract preloaded images to volume ...
	W1217 19:59:10.486574  612025 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:59:10.486604  612025 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:59:10.486647  612025 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:59:10.547327  612025 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-894575 --name old-k8s-version-894575 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-894575 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-894575 --network old-k8s-version-894575 --ip 192.168.85.2 --volume old-k8s-version-894575:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:59:10.885351  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Running}}
	I1217 19:59:10.906110  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:10.927145  612025 cli_runner.go:164] Run: docker exec old-k8s-version-894575 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:59:10.987110  612025 oci.go:144] the created container "old-k8s-version-894575" has a running status.
	I1217 19:59:10.987161  612025 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa...
	I1217 19:59:11.050534  612025 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:59:11.081072  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:11.102728  612025 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:59:11.102752  612025 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-894575 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:59:11.157152  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:11.180656  612025 machine.go:94] provisionDockerMachine start ...
	I1217 19:59:11.180828  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:11.203202  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.203572  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:11.203590  612025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:59:11.204528  612025 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55912->127.0.0.1:33433: read: connection reset by peer
	I1217 19:59:14.354070  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-894575
	
	I1217 19:59:14.354285  612025 ubuntu.go:182] provisioning hostname "old-k8s-version-894575"
	I1217 19:59:14.355054  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:14.381720  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:14.382061  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:14.382091  612025 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-894575 && echo "old-k8s-version-894575" | sudo tee /etc/hostname
	I1217 19:59:14.560599  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-894575
	
	I1217 19:59:14.560701  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:14.580139  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:14.580356  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:14.580373  612025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-894575' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-894575/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-894575' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:59:14.732018  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:59:14.732087  612025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 19:59:14.732137  612025 ubuntu.go:190] setting up certificates
	I1217 19:59:14.732152  612025 provision.go:84] configureAuth start
	I1217 19:59:14.732233  612025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-894575
	I1217 19:59:14.755418  612025 provision.go:143] copyHostCerts
	I1217 19:59:14.755500  612025 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 19:59:14.755523  612025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 19:59:14.755618  612025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 19:59:14.755773  612025 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 19:59:14.755789  612025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 19:59:14.755838  612025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 19:59:14.755978  612025 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 19:59:14.755994  612025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 19:59:14.756040  612025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 19:59:14.756232  612025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-894575 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-894575]
	I1217 19:59:14.848760  612025 provision.go:177] copyRemoteCerts
	I1217 19:59:14.848841  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:59:14.848906  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:14.873616  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:14.142269  613002 cli_runner.go:164] Run: docker network inspect no-preload-832842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:14.161365  613002 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 19:59:14.165671  613002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:59:14.176304  613002 kubeadm.go:884] updating cluster {Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:59:14.176440  613002 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:59:14.176475  613002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:59:14.202465  613002 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 19:59:14.202493  613002 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 19:59:14.202556  613002 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:14.202583  613002 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.202607  613002 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.202633  613002 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.202640  613002 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 19:59:14.202681  613002 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.202701  613002 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.202737  613002 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.203897  613002 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.203909  613002 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 19:59:14.203897  613002 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:14.203920  613002 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.203908  613002 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.203959  613002 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.203997  613002 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.204003  613002 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.323055  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.323436  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.326904  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.329417  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.335829  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.346226  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.348116  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 19:59:14.372870  613002 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1217 19:59:14.372935  613002 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.372988  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.374889  613002 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 19:59:14.374945  613002 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.375003  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.381999  613002 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1217 19:59:14.382042  613002 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.382111  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.384169  613002 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1217 19:59:14.384215  613002 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.384265  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.390916  613002 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1217 19:59:14.390979  613002 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.391038  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.400541  613002 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1217 19:59:14.400598  613002 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.400628  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.400553  613002 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 19:59:14.400651  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.400676  613002 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 19:59:14.400709  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.400743  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.400770  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.400792  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.400809  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.406289  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.438613  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.444038  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 19:59:14.444071  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.444240  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.444327  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.444334  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.448653  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.478479  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.482581  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.482813  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 19:59:14.487002  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.487054  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.487002  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.487123  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.517672  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:14.517786  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:14.520371  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 19:59:14.520488  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1217 19:59:14.521037  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 19:59:14.527267  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 19:59:14.527304  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:14.527317  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 19:59:14.527347  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:14.527367  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 19:59:14.527395  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:14.527397  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.527426  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1217 19:59:14.527437  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1217 19:59:14.527449  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1217 19:59:14.527416  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:14.527396  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 19:59:14.571040  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1217 19:59:14.571039  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.571120  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1217 19:59:14.571280  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 19:59:14.571289  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.571313  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1217 19:59:14.571319  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.571291  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 19:59:14.571339  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1217 19:59:14.571342  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 19:59:14.699873  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 19:59:14.699916  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 19:59:14.795465  613002 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 19:59:14.795534  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1217 19:59:15.145896  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:15.265623  613002 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 19:59:15.265638  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1217 19:59:15.265665  613002 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:15.265690  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:15.265707  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:15.265740  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:16.405593  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.139822545s)
	I1217 19:59:16.405633  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1217 19:59:16.405641  613002 ssh_runner.go:235] Completed: which crictl: (1.139907704s)
	I1217 19:59:16.405665  613002 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1217 19:59:16.405708  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1217 19:59:16.405713  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
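Each of the eight images follows the same pattern in the lines above: inspect on the node, remove the stale tag when the expected digest is missing, stat the tarball under /var/lib/minikube/images, copy it from the host cache if absent, then podman-load it. A rough Go sketch of that decision (paths are illustrative, and the remote steps really run over SSH, which this sketch does not do):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// ensureCached mirrors the pattern in the log: if the image tarball is not
	// already present on the node, copy it from the host cache and podman-load it.
	// Paths are illustrative; minikube runs the stat/load steps over SSH.
	func ensureCached(local, remote string) error {
		if err := exec.Command("stat", "-c", "%s %y", remote).Run(); err == nil {
			return nil // tarball already transferred, nothing to do
		}
		data, err := os.ReadFile(local)
		if err != nil {
			return err
		}
		if err := os.WriteFile(remote, data, 0o644); err != nil { // stand-in for scp
			return err
		}
		out, err := exec.Command("sudo", "podman", "load", "-i", remote).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		err := ensureCached(
			os.ExpandEnv("$HOME/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1"),
			"/var/lib/minikube/images/pause_3.10.1",
		)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}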
	I1217 19:59:12.465990  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 19:59:12.466047  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
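The interleaved 596882 lines belong to a separate profile waiting for its apiserver to come back: /healthz is polled until it answers, and probes that hit the client timeout are logged as stopped and retried. A minimal sketch of such a probe loop (endpoint, timeout and retry interval here are illustrative, not minikube's exact values):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is not trusted by the probing host, so skip
			// verification for this liveness check only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get("https://192.168.76.2:8443/healthz")
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthy")
				return
			}
			if err != nil {
				fmt.Println("healthz not ready:", err)
			} else {
				resp.Body.Close()
				fmt.Println("healthz status:", resp.Status)
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("gave up waiting for apiserver")
	}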
	I1217 19:59:14.988394  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:59:15.087322  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 19:59:15.192226  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:59:15.214454  612025 provision.go:87] duration metric: took 482.278515ms to configureAuth
	I1217 19:59:15.214498  612025 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:59:15.214720  612025 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 19:59:15.214877  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.238011  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:15.238370  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:15.238399  612025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:59:15.554712  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:59:15.554746  612025 machine.go:97] duration metric: took 4.374010357s to provisionDockerMachine
	I1217 19:59:15.554760  612025 client.go:176] duration metric: took 10.471842965s to LocalClient.Create
	I1217 19:59:15.554781  612025 start.go:167] duration metric: took 10.471908055s to libmachine.API.Create "old-k8s-version-894575"
	I1217 19:59:15.554791  612025 start.go:293] postStartSetup for "old-k8s-version-894575" (driver="docker")
	I1217 19:59:15.554806  612025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:59:15.554870  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:59:15.554948  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.574541  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:15.683411  612025 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:59:15.688001  612025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:59:15.688037  612025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:59:15.688052  612025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:59:15.688146  612025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:59:15.688250  612025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 19:59:15.688377  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:59:15.697743  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:15.722737  612025 start.go:296] duration metric: took 167.925036ms for postStartSetup
	I1217 19:59:15.723472  612025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-894575
	I1217 19:59:15.748004  612025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/config.json ...
	I1217 19:59:15.748362  612025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:59:15.748429  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.772713  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:15.881299  612025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:59:15.887512  612025 start.go:128] duration metric: took 10.806894995s to createHost
	I1217 19:59:15.887544  612025 start.go:83] releasing machines lock for "old-k8s-version-894575", held for 10.807110572s
	I1217 19:59:15.887620  612025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-894575
	I1217 19:59:15.909226  612025 ssh_runner.go:195] Run: cat /version.json
	I1217 19:59:15.909243  612025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:59:15.909291  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.909343  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.932854  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:15.932917  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:16.038284  612025 ssh_runner.go:195] Run: systemctl --version
	I1217 19:59:16.114179  612025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:59:16.160800  612025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:59:16.166297  612025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:59:16.166372  612025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:59:16.194230  612025 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:59:16.194262  612025 start.go:496] detecting cgroup driver to use...
	I1217 19:59:16.194302  612025 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:59:16.194355  612025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:59:16.212431  612025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:59:16.226442  612025 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:59:16.226509  612025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:59:16.245823  612025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:59:16.264742  612025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:59:16.373692  612025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:59:16.476619  612025 docker.go:234] disabling docker service ...
	I1217 19:59:16.476681  612025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:59:16.497483  612025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:59:16.510960  612025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:59:16.595568  612025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:59:16.682352  612025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:59:16.695356  612025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:59:16.710247  612025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 19:59:16.710309  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.721638  612025 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:59:16.721703  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.731590  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.741533  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.751481  612025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:59:16.760217  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.769347  612025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.784142  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.794546  612025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:59:16.802330  612025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:59:16.809983  612025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:16.891283  612025 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:59:17.259133  612025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:59:17.259215  612025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:59:17.264522  612025 start.go:564] Will wait 60s for crictl version
	I1217 19:59:17.264597  612025 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.269021  612025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:59:17.298337  612025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:59:17.298427  612025 ssh_runner.go:195] Run: crio --version
	I1217 19:59:17.328846  612025 ssh_runner.go:195] Run: crio --version
	I1217 19:59:17.360505  612025 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1217 19:59:17.361873  612025 cli_runner.go:164] Run: docker network inspect old-k8s-version-894575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:17.379694  612025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 19:59:17.383968  612025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:59:17.395630  612025 kubeadm.go:884] updating cluster {Name:old-k8s-version-894575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-894575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:59:17.395751  612025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:59:17.395793  612025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:59:17.430165  612025 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:59:17.430209  612025 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:59:17.430270  612025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:59:17.460476  612025 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:59:17.460499  612025 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:59:17.460508  612025 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1217 19:59:17.460592  612025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-894575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-894575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:59:17.460662  612025 ssh_runner.go:195] Run: crio config
	I1217 19:59:17.514354  612025 cni.go:84] Creating CNI manager for ""
	I1217 19:59:17.514384  612025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:17.514404  612025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:59:17.514430  612025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-894575 NodeName:old-k8s-version-894575 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:59:17.514606  612025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-894575"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:59:17.514689  612025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 19:59:17.523692  612025 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:59:17.523768  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:59:17.534543  612025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 19:59:17.550669  612025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:59:17.570716  612025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1217 19:59:17.586529  612025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:59:17.591490  612025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:59:17.604319  612025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:17.709931  612025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:17.734439  612025 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575 for IP: 192.168.85.2
	I1217 19:59:17.734464  612025 certs.go:195] generating shared ca certs ...
	I1217 19:59:17.734487  612025 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.734640  612025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:59:17.734689  612025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:59:17.734696  612025 certs.go:257] generating profile certs ...
	I1217 19:59:17.734746  612025 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.key
	I1217 19:59:17.734761  612025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt with IP's: []
	I1217 19:59:17.791783  612025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt ...
	I1217 19:59:17.791823  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: {Name:mka3e7404d2bf2be2c2ad017710d4ae4c61748c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.792047  612025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.key ...
	I1217 19:59:17.792066  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.key: {Name:mk24de6b250e021196965dc5b704e038970df7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.792243  612025 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d
	I1217 19:59:17.792271  612025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 19:59:17.899874  612025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d ...
	I1217 19:59:17.899903  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d: {Name:mke7171f32e441ac885f6f108f6ca622009b6054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.900116  612025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d ...
	I1217 19:59:17.900135  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d: {Name:mke8abbbd0ca8f8ca55c39333f965fe1dc236d23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.900248  612025 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt
	I1217 19:59:17.900344  612025 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key
	I1217 19:59:17.900408  612025 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key
	I1217 19:59:17.900424  612025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt with IP's: []
	I1217 19:59:18.020162  612025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt ...
	I1217 19:59:18.020199  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt: {Name:mkf6fd390f5ac409002c1ff65bfc5b799802f031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:18.020412  612025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key ...
	I1217 19:59:18.020441  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key: {Name:mk25f53b65f4ff150cc6249c6126fc63cd51dc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:18.020631  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:59:18.020672  612025 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:59:18.020679  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:59:18.020705  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:59:18.020727  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:59:18.020751  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:59:18.020791  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:18.021428  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:59:18.040881  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:59:18.059705  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:59:18.079896  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:59:18.100632  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 19:59:18.121337  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 19:59:18.141038  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:59:18.161305  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:59:18.182152  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:59:18.202628  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:59:18.223178  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:59:18.242163  612025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:59:18.255610  612025 ssh_runner.go:195] Run: openssl version
	I1217 19:59:18.262101  612025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.270728  612025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:59:18.278801  612025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.283421  612025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.283487  612025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.317952  612025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:59:18.326465  612025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 19:59:18.334353  612025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.343067  612025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:59:18.352047  612025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.356406  612025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.356471  612025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.391443  612025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:59:18.400089  612025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 19:59:18.408903  612025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.417744  612025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:59:18.427264  612025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.431630  612025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.431695  612025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.473630  612025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:18.484003  612025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:18.493785  612025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:59:18.498460  612025 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:59:18.498535  612025 kubeadm.go:401] StartCluster: {Name:old-k8s-version-894575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-894575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:59:18.498639  612025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:59:18.498702  612025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:59:18.530102  612025 cri.go:89] found id: ""
	I1217 19:59:18.530180  612025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:59:18.539061  612025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:59:18.547581  612025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:59:18.547648  612025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:59:18.555764  612025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:59:18.555783  612025 kubeadm.go:158] found existing configuration files:
	
	I1217 19:59:18.555826  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:59:18.563928  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:59:18.563991  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:59:18.571776  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:59:18.580768  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:59:18.580821  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:59:18.588420  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:59:18.596488  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:59:18.596553  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:59:18.604274  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:59:18.614398  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:59:18.614466  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:59:18.625485  612025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:59:18.725267  612025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 19:59:18.811521  612025 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 19:59:17.868446  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.462713588s)
	I1217 19:59:17.868476  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1217 19:59:17.868503  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:17.868563  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:17.868504  613002 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.462771195s)
	I1217 19:59:17.868667  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:19.108520  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.239930447s)
	I1217 19:59:19.108554  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1217 19:59:19.108581  613002 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 19:59:19.108581  613002 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.239887056s)
	I1217 19:59:19.108648  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1217 19:59:19.108653  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:20.379356  613002 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.270666145s)
	I1217 19:59:20.379428  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 19:59:20.379419  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.270743543s)
	I1217 19:59:20.379456  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 19:59:20.379487  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:20.379525  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 19:59:20.379534  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:20.384885  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 19:59:20.384925  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1217 19:59:21.836993  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.457430222s)
	I1217 19:59:21.837026  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1217 19:59:21.837058  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 19:59:21.837118  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 19:59:17.466359  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 19:59:17.466433  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:17.466488  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:17.497770  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:17.497799  596882 cri.go:89] found id: "3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	I1217 19:59:17.497806  596882 cri.go:89] found id: ""
	I1217 19:59:17.497817  596882 logs.go:282] 2 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab]
	I1217 19:59:17.497913  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.502819  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.506861  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:17.506938  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:17.539207  596882 cri.go:89] found id: ""
	I1217 19:59:17.539240  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.539252  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:17.539260  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:17.539327  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:17.570765  596882 cri.go:89] found id: ""
	I1217 19:59:17.570805  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.570816  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:17.570824  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:17.570893  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:17.604315  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:17.604338  596882 cri.go:89] found id: ""
	I1217 19:59:17.604347  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:17.604425  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.608662  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:17.608732  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:17.647593  596882 cri.go:89] found id: ""
	I1217 19:59:17.647643  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.647655  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:17.647663  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:17.647743  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:17.684782  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:17.684811  596882 cri.go:89] found id: "4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0"
	I1217 19:59:17.684817  596882 cri.go:89] found id: ""
	I1217 19:59:17.684828  596882 logs.go:282] 2 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb 4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0]
	I1217 19:59:17.684888  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.689673  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.694157  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:17.694216  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:17.726874  596882 cri.go:89] found id: ""
	I1217 19:59:17.726907  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.726920  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:17.726928  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:17.726987  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:17.763396  596882 cri.go:89] found id: ""
	I1217 19:59:17.763430  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.763444  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:17.763469  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:17.763492  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:17.803831  596882 logs.go:123] Gathering logs for kube-controller-manager [4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0] ...
	I1217 19:59:17.803865  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0"
	I1217 19:59:17.837649  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:17.837680  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:17.878291  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:17.878342  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:17.898217  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:17.898248  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 19:59:23.424852  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.587705503s)
	I1217 19:59:23.424886  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1217 19:59:23.424919  613002 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 19:59:23.424969  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1217 19:59:24.032178  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1217 19:59:24.032233  613002 cache_images.go:125] Successfully loaded all cached images
	I1217 19:59:24.032242  613002 cache_images.go:94] duration metric: took 9.829729547s to LoadCachedImages
	I1217 19:59:24.032259  613002 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 19:59:24.032379  613002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-832842 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:59:24.032472  613002 ssh_runner.go:195] Run: crio config
	I1217 19:59:24.083634  613002 cni.go:84] Creating CNI manager for ""
	I1217 19:59:24.083658  613002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:24.083675  613002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:59:24.083699  613002 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-832842 NodeName:no-preload-832842 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:59:24.083817  613002 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-832842"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:59:24.083880  613002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 19:59:24.092758  613002 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1217 19:59:24.092818  613002 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 19:59:24.101629  613002 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	I1217 19:59:24.101711  613002 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet
	I1217 19:59:24.101731  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1217 19:59:24.101760  613002 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm
	I1217 19:59:24.105867  613002 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1217 19:59:24.105898  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1217 19:59:24.929276  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:59:24.943995  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1217 19:59:24.948826  613002 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1217 19:59:24.948861  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1217 19:59:25.112743  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1217 19:59:25.117503  613002 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1217 19:59:25.117532  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
	I1217 19:59:25.293842  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:59:25.316907  613002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 19:59:25.337191  613002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 19:59:25.365368  613002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 19:59:25.380129  613002 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:59:25.384754  613002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:59:25.396562  613002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:25.509650  613002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:25.534570  613002 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842 for IP: 192.168.103.2
	I1217 19:59:25.534596  613002 certs.go:195] generating shared ca certs ...
	I1217 19:59:25.534617  613002 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.534810  613002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:59:25.534885  613002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:59:25.534902  613002 certs.go:257] generating profile certs ...
	I1217 19:59:25.534978  613002 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.key
	I1217 19:59:25.535000  613002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt with IP's: []
	I1217 19:59:25.592742  613002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt ...
	I1217 19:59:25.592778  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: {Name:mk42486369f77e221c9aab49a651e94775b7bae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.593012  613002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.key ...
	I1217 19:59:25.593039  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.key: {Name:mk89c16ef2c5a56a360da970a678076a4bb4c340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.593230  613002 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62
	I1217 19:59:25.593253  613002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 19:59:25.631586  613002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62 ...
	I1217 19:59:25.631615  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62: {Name:mk54f8ae2cd91472c7364e13c057e39714727a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.631793  613002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62 ...
	I1217 19:59:25.631811  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62: {Name:mk3c8685339dd7678c908188a708a038b54e0f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.631912  613002 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt
	I1217 19:59:25.632002  613002 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key
	I1217 19:59:25.632086  613002 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key
	I1217 19:59:25.632106  613002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt with IP's: []
	I1217 19:59:25.672169  613002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt ...
	I1217 19:59:25.672202  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt: {Name:mk3db8ab41e551f463725ce6bc26b39897c9471f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.672387  613002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key ...
	I1217 19:59:25.672403  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key: {Name:mkf5e02869cc5cc7bbc69664349cd424c4d4dc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.672589  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:59:25.672628  613002 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:59:25.672638  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:59:25.672661  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:59:25.672689  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:59:25.672719  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:59:25.672773  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:25.673485  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:59:25.692798  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:59:25.710914  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:59:25.730368  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:59:25.747899  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 19:59:25.765372  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:59:25.784884  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:59:25.805128  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:59:25.823250  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:59:25.843373  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:59:25.861323  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:59:25.880401  613002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:59:25.893860  613002 ssh_runner.go:195] Run: openssl version
	I1217 19:59:25.900269  613002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.908747  613002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:59:25.917382  613002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.921768  613002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.921847  613002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.957212  613002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:59:25.965655  613002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 19:59:25.973751  613002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:59:25.981672  613002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:59:25.990021  613002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:59:25.994127  613002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:59:25.994191  613002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:59:26.031902  613002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:26.041410  613002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:26.050131  613002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.058752  613002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:59:26.066986  613002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.071354  613002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.071419  613002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.107649  613002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:59:26.116547  613002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 19:59:26.126087  613002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:59:26.130159  613002 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:59:26.130249  613002 kubeadm.go:401] StartCluster: {Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:59:26.130334  613002 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:59:26.130402  613002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:59:26.159136  613002 cri.go:89] found id: ""
	I1217 19:59:26.159224  613002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:59:26.168262  613002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:59:26.177121  613002 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:59:26.177188  613002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:59:26.185838  613002 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:59:26.185870  613002 kubeadm.go:158] found existing configuration files:
	
	I1217 19:59:26.185943  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:59:26.194347  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:59:26.194404  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:59:26.202469  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:59:26.211105  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:59:26.211183  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:59:26.219223  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:59:26.228072  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:59:26.228173  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:59:26.239165  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:59:26.248476  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:59:26.248547  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
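	The four grep/rm pairs above are the stale-kubeconfig cleanup on first start: for each of admin.conf, kubelet.conf, controller-manager.conf and scheduler.conf, the expected control-plane endpoint (https://control-plane.minikube.internal:8443) is grepped for, and when grep exits non-zero (status 2 here, because the files do not exist yet) the file is removed so the following kubeadm init regenerates it. Below is a minimal local sketch of that pattern; the function name and the use of os/exec instead of minikube's SSH runner are illustrative assumptions, not the actual minikube code.

```go
// stale_kubeconfig_cleanup.go: a minimal sketch of the stale-config check logged
// above. Names are illustrative; minikube runs the same commands over SSH.
package main

import (
	"fmt"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanupStaleKubeconfigs removes any kubeconfig that does not reference the
// expected control-plane endpoint, so kubeadm will regenerate it.
func cleanupStaleKubeconfigs() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits 1 when the endpoint is absent and 2 when the file is
		// missing (the case in the log above); either way the file is removed.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupStaleKubeconfigs()
}
```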
	I1217 19:59:26.256813  613002 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:59:26.294070  613002 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 19:59:26.294177  613002 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:59:26.376698  613002 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 19:59:26.376790  613002 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 19:59:26.376837  613002 kubeadm.go:319] OS: Linux
	I1217 19:59:26.376911  613002 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 19:59:26.376955  613002 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 19:59:26.377049  613002 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 19:59:26.377171  613002 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 19:59:26.377236  613002 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 19:59:26.377301  613002 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 19:59:26.377374  613002 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 19:59:26.377457  613002 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 19:59:26.440855  613002 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:59:26.441052  613002 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:59:26.441214  613002 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 19:59:26.456663  613002 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:59:26.460625  613002 out.go:252]   - Generating certificates and keys ...
	I1217 19:59:26.460785  613002 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:59:26.460927  613002 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:59:26.512989  613002 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:59:26.713021  613002 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:59:26.766707  613002 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:59:26.797181  613002 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:59:26.899298  613002 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:59:26.899503  613002 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-832842] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 19:59:27.019263  613002 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:59:27.019477  613002 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-832842] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 19:59:27.062872  613002 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:59:27.094655  613002 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:59:27.159114  613002 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:59:27.159215  613002 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:59:27.243797  613002 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:59:27.325737  613002 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 19:59:27.389284  613002 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:59:27.532504  613002 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:59:27.584726  613002 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:59:27.585277  613002 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:59:27.589779  613002 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:59:28.530940  612025 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 19:59:28.531027  612025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:59:28.531186  612025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 19:59:28.531251  612025 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 19:59:28.531295  612025 kubeadm.go:319] OS: Linux
	I1217 19:59:28.531349  612025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 19:59:28.531403  612025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 19:59:28.531461  612025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 19:59:28.531516  612025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 19:59:28.531571  612025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 19:59:28.531630  612025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 19:59:28.531694  612025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 19:59:28.531742  612025 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 19:59:28.531831  612025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:59:28.531950  612025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:59:28.532057  612025 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 19:59:28.532448  612025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:59:28.534660  612025 out.go:252]   - Generating certificates and keys ...
	I1217 19:59:28.534908  612025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:59:28.535120  612025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:59:28.535300  612025 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:59:28.535430  612025 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:59:28.535603  612025 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:59:28.535760  612025 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:59:28.535912  612025 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:59:28.536213  612025 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-894575] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 19:59:28.536375  612025 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:59:28.536687  612025 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-894575] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 19:59:28.536865  612025 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:59:28.536949  612025 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:59:28.537002  612025 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:59:28.537089  612025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:59:28.537156  612025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:59:28.537229  612025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:59:28.537309  612025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:59:28.537390  612025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:59:28.537541  612025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:59:28.537714  612025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:59:28.538921  612025 out.go:252]   - Booting up control plane ...
	I1217 19:59:28.539069  612025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:59:28.539215  612025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:59:28.539310  612025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:59:28.539456  612025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:59:28.539577  612025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:59:28.539625  612025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:59:28.539867  612025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 19:59:28.540020  612025 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.502314 seconds
	I1217 19:59:28.540208  612025 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:59:28.540431  612025 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:59:28.540553  612025 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:59:28.540839  612025 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-894575 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:59:28.540930  612025 kubeadm.go:319] [bootstrap-token] Using token: 8u44gz.h67xjev6iuf0hv1v
	I1217 19:59:28.542359  612025 out.go:252]   - Configuring RBAC rules ...
	I1217 19:59:28.542565  612025 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:59:28.542716  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:59:28.542912  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:59:28.543102  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:59:28.543254  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:59:28.543379  612025 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:59:28.543553  612025 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:59:28.543622  612025 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:59:28.543684  612025 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:59:28.543693  612025 kubeadm.go:319] 
	I1217 19:59:28.543770  612025 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:59:28.543779  612025 kubeadm.go:319] 
	I1217 19:59:28.543877  612025 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:59:28.543883  612025 kubeadm.go:319] 
	I1217 19:59:28.543921  612025 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:59:28.544010  612025 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:59:28.544122  612025 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:59:28.544133  612025 kubeadm.go:319] 
	I1217 19:59:28.544213  612025 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:59:28.544222  612025 kubeadm.go:319] 
	I1217 19:59:28.544286  612025 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:59:28.544295  612025 kubeadm.go:319] 
	I1217 19:59:28.544369  612025 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:59:28.544491  612025 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:59:28.544575  612025 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:59:28.544582  612025 kubeadm.go:319] 
	I1217 19:59:28.544655  612025 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:59:28.544722  612025 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:59:28.544727  612025 kubeadm.go:319] 
	I1217 19:59:28.544796  612025 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8u44gz.h67xjev6iuf0hv1v \
	I1217 19:59:28.544926  612025 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 19:59:28.544958  612025 kubeadm.go:319] 	--control-plane 
	I1217 19:59:28.544963  612025 kubeadm.go:319] 
	I1217 19:59:28.545063  612025 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:59:28.545073  612025 kubeadm.go:319] 
	I1217 19:59:28.545214  612025 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8u44gz.h67xjev6iuf0hv1v \
	I1217 19:59:28.545415  612025 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 19:59:28.545431  612025 cni.go:84] Creating CNI manager for ""
	I1217 19:59:28.545444  612025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:28.546893  612025 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 19:59:28.548280  612025 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 19:59:28.553989  612025 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1217 19:59:28.554009  612025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 19:59:28.569060  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 19:59:29.291176  612025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:59:29.291251  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:29.291333  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-894575 minikube.k8s.io/updated_at=2025_12_17T19_59_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=old-k8s-version-894575 minikube.k8s.io/primary=true
	I1217 19:59:29.302379  612025 ops.go:34] apiserver oom_adj: -16
	I1217 19:59:29.382819  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
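	After kubeadm init succeeds, the old-k8s-version run above performs the standard post-init bootstrap: it reads the apiserver's oom_adj (logged as -16), creates the minikube-rbac clusterrolebinding granting cluster-admin to the kube-system:default service account, relabels the node with minikube metadata, and then repeatedly runs `kubectl get sa default` until the default service account exists. The long runs of identical "get sa default" lines here and further down are that poll, which the log later reports as the elevateKubeSystemPrivileges duration. A rough sketch of the polling loop, assuming a local kubectl on PATH (minikube actually invokes the cached binary under /var/lib/minikube/binaries over SSH):

```go
// wait_default_sa.go: an illustrative sketch of the "get sa default" poll that
// produces the repeated log lines above; helper name and paths are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries `kubectl get sa default` roughly every
// 500ms until it succeeds or the timeout expires.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC bootstrap can proceed
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```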
	I1217 19:59:27.591559  613002 out.go:252]   - Booting up control plane ...
	I1217 19:59:27.591699  613002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:59:27.591811  613002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:59:27.592487  613002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:59:27.607253  613002 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:59:27.607348  613002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 19:59:27.614185  613002 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 19:59:27.614297  613002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:59:27.614343  613002 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:59:27.730105  613002 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 19:59:27.730294  613002 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 19:59:28.231405  613002 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.49573ms
	I1217 19:59:28.234412  613002 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 19:59:28.234554  613002 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1217 19:59:28.234710  613002 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 19:59:28.234829  613002 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 19:59:29.239954  613002 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005410296s
	I1217 19:59:29.879999  613002 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.645384574s
	I1217 19:59:31.735880  613002 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501435288s
	I1217 19:59:31.753036  613002 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:59:31.764161  613002 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:59:31.773029  613002 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:59:31.773377  613002 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-832842 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:59:31.780709  613002 kubeadm.go:319] [bootstrap-token] Using token: s9jxxo.i2acucfjf8euorlv
	I1217 19:59:31.782172  613002 out.go:252]   - Configuring RBAC rules ...
	I1217 19:59:31.782360  613002 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:59:31.785445  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:59:31.791153  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:59:31.793759  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:59:31.796294  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:59:31.798768  613002 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:59:32.141800  613002 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:59:28.259205  596882 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.360921949s)
	W1217 19:59:28.259284  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:48742->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:48742->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1217 19:59:28.259300  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:28.259321  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:28.295803  596882 logs.go:123] Gathering logs for kube-apiserver [3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab] ...
	I1217 19:59:28.295840  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	W1217 19:59:28.326266  596882 logs.go:130] failed kube-apiserver [3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab": Process exited with status 1
	stdout:
	
	stderr:
	E1217 19:59:28.323499    1306 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist" containerID="3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	time="2025-12-17T19:59:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 19:59:28.323499    1306 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist" containerID="3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	time="2025-12-17T19:59:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist"
	
	** /stderr **
	I1217 19:59:28.326292  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:28.326309  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:28.366825  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:28.366865  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:28.471908  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:28.471951  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:31.012651  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:31.013226  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:31.013285  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:31.013339  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:31.046918  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:31.046945  596882 cri.go:89] found id: ""
	I1217 19:59:31.046966  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:31.047034  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:31.052032  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:31.052140  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:31.085479  596882 cri.go:89] found id: ""
	I1217 19:59:31.085512  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.085524  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:31.085532  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:31.085603  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:31.119059  596882 cri.go:89] found id: ""
	I1217 19:59:31.119109  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.119123  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:31.119133  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:31.119206  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:31.152363  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:31.152389  596882 cri.go:89] found id: ""
	I1217 19:59:31.152399  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:31.152462  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:31.156830  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:31.156924  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:31.189580  596882 cri.go:89] found id: ""
	I1217 19:59:31.189606  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.189614  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:31.189620  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:31.189680  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:31.222872  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:31.222899  596882 cri.go:89] found id: ""
	I1217 19:59:31.222909  596882 logs.go:282] 1 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:31.222986  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:31.227977  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:31.228058  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:31.259327  596882 cri.go:89] found id: ""
	I1217 19:59:31.259355  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.259367  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:31.259374  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:31.259440  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:31.294138  596882 cri.go:89] found id: ""
	I1217 19:59:31.294171  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.294185  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:31.294199  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:31.294216  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:31.333068  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:31.333120  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:31.406667  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:31.406708  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:31.424229  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:31.424261  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:31.486747  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:31.486770  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:31.486789  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:31.517894  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:31.517929  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:31.544828  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:31.544858  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:31.573646  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:31.573678  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
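	The block above (process 596882) is the log-gathering pass for a cluster whose apiserver is refusing connections: each control-plane component is looked up with `crictl ps -a --quiet --name=<component>`, and for every container ID found the last 400 lines are pulled with `crictl logs --tail 400 <id>`, with journalctl used for kubelet and CRI-O. A minimal sketch of that pattern is below; running crictl locally (instead of over SSH as minikube does) and the function name are assumptions for illustration.

```go
// gather_component_logs.go: a minimal sketch of the crictl-based log gathering
// shown in the lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists all containers whose name matches component and
// prints the last `tail` lines of each one's logs.
func gatherComponentLogs(component string, tail int) error {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return fmt.Errorf("listing %s containers: %w", component, err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("No container was found matching %q\n", component)
		return nil
	}
	for _, id := range ids {
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(tail), id).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s [%s]: %v\n", component, id, err)
			continue
		}
		fmt.Printf("==> %s [%s]\n%s\n", component, id, logs)
	}
	return nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
		_ = gatherComponentLogs(c, 400)
	}
}
```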
	I1217 19:59:32.557686  613002 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:59:33.141329  613002 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:59:33.142187  613002 kubeadm.go:319] 
	I1217 19:59:33.142307  613002 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:59:33.142328  613002 kubeadm.go:319] 
	I1217 19:59:33.142444  613002 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:59:33.142459  613002 kubeadm.go:319] 
	I1217 19:59:33.142496  613002 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:59:33.142590  613002 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:59:33.142672  613002 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:59:33.142684  613002 kubeadm.go:319] 
	I1217 19:59:33.142771  613002 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:59:33.142780  613002 kubeadm.go:319] 
	I1217 19:59:33.142849  613002 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:59:33.142864  613002 kubeadm.go:319] 
	I1217 19:59:33.142917  613002 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:59:33.143011  613002 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:59:33.143159  613002 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:59:33.143172  613002 kubeadm.go:319] 
	I1217 19:59:33.143302  613002 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:59:33.143424  613002 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:59:33.143433  613002 kubeadm.go:319] 
	I1217 19:59:33.143554  613002 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s9jxxo.i2acucfjf8euorlv \
	I1217 19:59:33.143715  613002 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 19:59:33.143747  613002 kubeadm.go:319] 	--control-plane 
	I1217 19:59:33.143753  613002 kubeadm.go:319] 
	I1217 19:59:33.143883  613002 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:59:33.143893  613002 kubeadm.go:319] 
	I1217 19:59:33.143981  613002 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s9jxxo.i2acucfjf8euorlv \
	I1217 19:59:33.144120  613002 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 19:59:33.145337  613002 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 19:59:33.145459  613002 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 19:59:33.145485  613002 cni.go:84] Creating CNI manager for ""
	I1217 19:59:33.145497  613002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:33.148015  613002 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 19:59:29.883834  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:30.383205  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:30.883777  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:31.383264  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:31.883260  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:32.383322  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:32.883645  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.383878  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.883359  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.383556  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.149190  613002 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 19:59:33.153632  613002 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 19:59:33.153653  613002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 19:59:33.167865  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 19:59:33.374925  613002 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:59:33.374980  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.375055  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-832842 minikube.k8s.io/updated_at=2025_12_17T19_59_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=no-preload-832842 minikube.k8s.io/primary=true
	I1217 19:59:33.477436  613002 ops.go:34] apiserver oom_adj: -16
	I1217 19:59:33.477467  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.977957  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.478305  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.977579  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.478330  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.977695  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.477648  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.978019  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.115468  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:34.115895  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:34.115949  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:34.116003  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:34.145723  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:34.145746  596882 cri.go:89] found id: ""
	I1217 19:59:34.145756  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:34.145819  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:34.149802  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:34.149857  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:34.177893  596882 cri.go:89] found id: ""
	I1217 19:59:34.177924  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.177937  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:34.177947  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:34.178007  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:34.204154  596882 cri.go:89] found id: ""
	I1217 19:59:34.204182  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.204198  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:34.204206  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:34.204281  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:34.233062  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:34.233098  596882 cri.go:89] found id: ""
	I1217 19:59:34.233111  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:34.233166  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:34.237113  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:34.237180  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:34.265676  596882 cri.go:89] found id: ""
	I1217 19:59:34.265702  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.265713  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:34.265721  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:34.265780  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:34.295563  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:34.295590  596882 cri.go:89] found id: ""
	I1217 19:59:34.295600  596882 logs.go:282] 1 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:34.295666  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:34.299895  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:34.299986  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:34.328557  596882 cri.go:89] found id: ""
	I1217 19:59:34.328581  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.328589  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:34.328594  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:34.328658  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:34.359293  596882 cri.go:89] found id: ""
	I1217 19:59:34.359322  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.359334  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:34.359344  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:34.359356  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:34.394043  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:34.394093  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:34.475765  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:34.475802  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:34.494469  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:34.494507  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:34.559885  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:34.559911  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:34.559925  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:34.590687  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:34.590722  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:34.621883  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:34.621923  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:34.650696  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:34.650723  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:37.191714  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:37.192259  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:37.192321  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:37.192391  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:37.221286  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:37.221311  596882 cri.go:89] found id: ""
	I1217 19:59:37.221322  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:37.221378  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:37.225451  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:37.225517  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:37.253499  596882 cri.go:89] found id: ""
	I1217 19:59:37.253530  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.253539  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:37.253545  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:37.253594  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:37.478038  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:37.557714  613002 kubeadm.go:1114] duration metric: took 4.182775928s to wait for elevateKubeSystemPrivileges
	I1217 19:59:37.557764  613002 kubeadm.go:403] duration metric: took 11.427536902s to StartCluster
	I1217 19:59:37.557788  613002 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:37.557873  613002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:59:37.559508  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:37.559850  613002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:37.560024  613002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:59:37.560053  613002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 19:59:37.560195  613002 addons.go:70] Setting storage-provisioner=true in profile "no-preload-832842"
	I1217 19:59:37.560243  613002 addons.go:239] Setting addon storage-provisioner=true in "no-preload-832842"
	I1217 19:59:37.560277  613002 host.go:66] Checking if "no-preload-832842" exists ...
	I1217 19:59:37.560274  613002 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:59:37.560332  613002 addons.go:70] Setting default-storageclass=true in profile "no-preload-832842"
	I1217 19:59:37.560368  613002 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-832842"
	I1217 19:59:37.560717  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:37.560940  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:37.563610  613002 out.go:179] * Verifying Kubernetes components...
	I1217 19:59:37.566021  613002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:37.595287  613002 addons.go:239] Setting addon default-storageclass=true in "no-preload-832842"
	I1217 19:59:37.595453  613002 host.go:66] Checking if "no-preload-832842" exists ...
	I1217 19:59:37.595654  613002 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:37.596752  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:37.598138  613002 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:37.598158  613002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:59:37.598222  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:37.634670  613002 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:37.634698  613002 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:59:37.634767  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:37.636362  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:37.663482  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
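	The two sshutil lines above show how the node is reached: Docker is asked which host port it mapped to the node container's 22/tcp, and an SSH client is opened to 127.0.0.1 on that port (33438 here) with the per-machine id_rsa key. A small sketch of the port lookup, with an illustrative function name and trimmed error handling:

```go
// ssh_port_lookup.go: a sketch of deriving the SSH endpoint seen in the sshutil
// lines above from `docker container inspect`.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshAddress returns 127.0.0.1:<hostPort>, where hostPort is the host port
// Docker mapped to port 22/tcp of the named node container.
func sshAddress(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspecting %s: %w", container, err)
	}
	port := strings.Trim(strings.TrimSpace(string(out)), "'")
	return "127.0.0.1:" + port, nil
}

func main() {
	addr, err := sshAddress("no-preload-832842")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint:", addr) // e.g. 127.0.0.1:33438 as in the log
}
```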
	I1217 19:59:37.709686  613002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 19:59:37.748587  613002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:37.774123  613002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:37.805603  613002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:37.895739  613002 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 19:59:37.897094  613002 node_ready.go:35] waiting up to 6m0s for node "no-preload-832842" to be "Ready" ...
	I1217 19:59:38.111071  613002 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
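	The long pipeline at 19:59:37.709686 is how the "host record injected into CoreDNS's ConfigMap" line comes about: the coredns ConfigMap is fetched, sed inserts a hosts block mapping 192.168.103.1 (the Docker network gateway) to host.minikube.internal immediately before the `forward . /etc/resolv.conf` plugin line and adds a `log` directive after `errors`, and the edited manifest is piped back through `kubectl replace -f -`. A standalone sketch of the same Corefile edit is below; the sample Corefile and helper name are assumptions for illustration, since minikube edits the live ConfigMap rather than a string in memory.

```go
// corefile_host_injection.go: a sketch of the Corefile rewrite performed by the
// sed pipeline above.
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block resolving host.minikube.internal to
// gatewayIP immediately before the "forward . /etc/resolv.conf" plugin line.
func injectHostRecord(corefile, gatewayIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", gatewayIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hostsBlock)
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf {\n       max_concurrent 1000\n    }\n    cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.103.1"))
}
```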
	I1217 19:59:34.883298  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.383802  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.883833  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.383846  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.882981  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:37.383781  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:37.883281  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:38.383909  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:38.882975  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:39.383488  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:39.882955  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
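The repeated "kubectl get sa default" runs above (roughly every 500ms) are a readiness poll: once the default service account exists, the controller manager has finished bootstrapping the namespace, which the 612025 run later reports as the elevateKubeSystemPrivileges duration. A standalone sketch of the same poll follows, assuming a local kubectl on PATH; the function name waitForDefaultServiceAccount is made up for illustration and is not minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount retries "kubectl get sa default" until it
// succeeds or the timeout expires.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists; the namespace is usable
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence seen in the log
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("default service account is ready")
	}
}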
	I1217 19:59:38.112333  613002 addons.go:530] duration metric: took 552.27323ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 19:59:38.400988  613002 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-832842" context rescaled to 1 replicas
	W1217 19:59:39.900027  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	W1217 19:59:41.900212  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	I1217 19:59:37.283518  596882 cri.go:89] found id: ""
	I1217 19:59:37.283547  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.283557  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:37.283564  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:37.283628  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:37.315309  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:37.315342  596882 cri.go:89] found id: ""
	I1217 19:59:37.315353  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:37.315430  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:37.319898  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:37.319984  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:37.348269  596882 cri.go:89] found id: ""
	I1217 19:59:37.348296  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.348305  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:37.348310  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:37.348360  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:37.377023  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:37.377048  596882 cri.go:89] found id: ""
	I1217 19:59:37.377059  596882 logs.go:282] 1 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:37.377144  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:37.381442  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:37.381506  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:37.414133  596882 cri.go:89] found id: ""
	I1217 19:59:37.414184  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.414197  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:37.414205  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:37.414266  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:37.443819  596882 cri.go:89] found id: ""
	I1217 19:59:37.443851  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.443863  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:37.443876  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:37.443891  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:37.522099  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:37.522141  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:37.545338  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:37.545376  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:37.646905  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:37.646929  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:37.646944  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:37.704266  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:37.704362  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:37.744589  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:37.744633  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:37.781022  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:37.781051  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:37.839983  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:37.840030  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:40.383195  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:40.383600  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:40.383660  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:40.383740  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:40.416354  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:40.416380  596882 cri.go:89] found id: ""
	I1217 19:59:40.416391  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:40.416468  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.421551  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:40.421618  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:40.453001  596882 cri.go:89] found id: ""
	I1217 19:59:40.453026  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.453035  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:40.453040  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:40.453130  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:40.482820  596882 cri.go:89] found id: ""
	I1217 19:59:40.482849  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.482860  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:40.482868  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:40.482941  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:40.513059  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:40.513124  596882 cri.go:89] found id: ""
	I1217 19:59:40.513136  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:40.513219  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.517585  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:40.517647  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:40.547265  596882 cri.go:89] found id: ""
	I1217 19:59:40.547298  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.547311  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:40.547319  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:40.547390  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:40.575327  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:40.575350  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:40.575353  596882 cri.go:89] found id: ""
	I1217 19:59:40.575362  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:40.575428  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.579702  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.583810  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:40.583895  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:40.612529  596882 cri.go:89] found id: ""
	I1217 19:59:40.612556  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.612565  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:40.612571  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:40.612626  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:40.638754  596882 cri.go:89] found id: ""
	I1217 19:59:40.638784  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.638793  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:40.638809  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:40.638820  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:40.656628  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:40.656660  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:40.718279  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:40.718304  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:40.718320  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:40.749816  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:40.749848  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:40.789291  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:40.789332  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:40.823447  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:40.823474  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:40.891268  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:40.891304  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:40.921938  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:40.921969  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:40.950934  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:40.950968  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
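The 596882 run gathers diagnostics in two steps: it discovers container IDs with "crictl ps -a --quiet --name=<component>" and then tails each container's logs with "crictl logs --tail 400 <id>". A minimal local sketch of that pattern is below, assuming crictl is on PATH and runnable with sufficient privileges; the helper name containerIDs is illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists all container IDs (running or exited) whose name matches
// the given component, using the same crictl flags seen in the log.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, component := range []string{"kube-apiserver", "kube-scheduler", "kube-controller-manager"} {
		ids, err := containerIDs(component)
		if err != nil {
			fmt.Println("listing", component, "failed:", err)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each matching container's logs.
			logs, _ := exec.Command("crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("=== %s [%s] ===\n%s\n", component, id, logs)
		}
	}
}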
	I1217 19:59:40.382939  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:40.883642  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:41.383305  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:41.882977  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:42.383458  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:42.470150  612025 kubeadm.go:1114] duration metric: took 13.178959144s to wait for elevateKubeSystemPrivileges
	I1217 19:59:42.470191  612025 kubeadm.go:403] duration metric: took 23.971663614s to StartCluster
	I1217 19:59:42.470212  612025 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:42.470292  612025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:59:42.471672  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:42.471932  612025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:59:42.471960  612025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 19:59:42.471933  612025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:42.472029  612025 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-894575"
	I1217 19:59:42.472190  612025 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 19:59:42.472206  612025 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-894575"
	I1217 19:59:42.472035  612025 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-894575"
	I1217 19:59:42.472287  612025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-894575"
	I1217 19:59:42.472255  612025 host.go:66] Checking if "old-k8s-version-894575" exists ...
	I1217 19:59:42.472613  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:42.472868  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:42.473672  612025 out.go:179] * Verifying Kubernetes components...
	I1217 19:59:42.475009  612025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:42.497695  612025 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-894575"
	I1217 19:59:42.497748  612025 host.go:66] Checking if "old-k8s-version-894575" exists ...
	I1217 19:59:42.498230  612025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:42.498241  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:42.499515  612025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:42.499536  612025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:59:42.499590  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:42.530066  612025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:42.530117  612025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:59:42.530192  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:42.530842  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:42.553294  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:42.568256  612025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 19:59:42.620733  612025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:42.650189  612025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:42.673914  612025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:42.792400  612025 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 19:59:42.793909  612025 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-894575" to be "Ready" ...
	I1217 19:59:43.037351  612025 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 19:59:43.038583  612025 addons.go:530] duration metric: took 566.615369ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 19:59:43.297464  612025 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-894575" context rescaled to 1 replicas
	W1217 19:59:44.798596  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
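For each addon enabled above, the flow is the same: the manifest is copied onto the node (the "scp memory -->" and "scp storageclass/storageclass.yaml -->" lines) and then applied with kubectl while KUBECONFIG points at the node's kubeconfig. A rough local equivalent follows, assuming a kubectl binary and a writable path; applyAddonManifest and the demo ConfigMap are illustrative, not minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifest writes a manifest to disk and applies it with kubectl,
// with KUBECONFIG pointed at the given kubeconfig file.
func applyAddonManifest(manifest []byte, path, kubeconfig, kubectl string) error {
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command(kubectl, "apply", "-f", path)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: ConfigMap\nmetadata:\n  name: demo\n  namespace: default\n")
	if err := applyAddonManifest(manifest, "/tmp/demo-addon.yaml", "/var/lib/minikube/kubeconfig", "kubectl"); err != nil {
		fmt.Println(err)
	}
}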
	W1217 19:59:43.901710  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	W1217 19:59:46.400051  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	I1217 19:59:43.480183  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:43.480647  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:43.480726  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:43.480783  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:43.518232  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:43.518262  596882 cri.go:89] found id: ""
	I1217 19:59:43.518273  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:43.518337  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.523736  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:43.523817  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:43.565315  596882 cri.go:89] found id: ""
	I1217 19:59:43.565344  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.565356  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:43.565363  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:43.565430  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:43.601570  596882 cri.go:89] found id: ""
	I1217 19:59:43.601597  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.601608  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:43.601618  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:43.601693  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:43.636751  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:43.636772  596882 cri.go:89] found id: ""
	I1217 19:59:43.636787  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:43.636851  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.641329  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:43.641408  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:43.671328  596882 cri.go:89] found id: ""
	I1217 19:59:43.671364  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.671377  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:43.671385  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:43.671444  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:43.705020  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:43.705046  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:43.705052  596882 cri.go:89] found id: ""
	I1217 19:59:43.705062  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:43.705145  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.709433  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.713466  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:43.713539  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:43.746015  596882 cri.go:89] found id: ""
	I1217 19:59:43.746045  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.746057  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:43.746064  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:43.746168  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:43.785258  596882 cri.go:89] found id: ""
	I1217 19:59:43.785290  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.785303  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:43.785321  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:43.785336  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:43.808918  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:43.808960  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:43.845935  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:43.845971  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:43.881348  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:43.881385  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:43.944739  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:43.944779  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:43.986673  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:43.986716  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:44.080162  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:44.080222  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:44.142775  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:44.142795  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:44.142810  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:44.176881  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:44.176919  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:46.710466  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:46.711004  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:46.711063  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:46.711165  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:46.740274  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:46.740302  596882 cri.go:89] found id: ""
	I1217 19:59:46.740316  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:46.740437  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.744712  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:46.744795  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:46.773367  596882 cri.go:89] found id: ""
	I1217 19:59:46.773395  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.773408  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:46.773416  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:46.773492  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:46.801877  596882 cri.go:89] found id: ""
	I1217 19:59:46.801910  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.801921  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:46.801929  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:46.801992  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:46.830209  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:46.830250  596882 cri.go:89] found id: ""
	I1217 19:59:46.830260  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:46.830319  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.834571  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:46.834639  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:46.863489  596882 cri.go:89] found id: ""
	I1217 19:59:46.863518  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.863530  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:46.863537  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:46.863590  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:46.892277  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:46.892303  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:46.892540  596882 cri.go:89] found id: ""
	I1217 19:59:46.892767  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:46.892864  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.897970  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.902175  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:46.902242  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:46.930663  596882 cri.go:89] found id: ""
	I1217 19:59:46.930695  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.930708  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:46.930721  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:46.930805  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:46.960950  596882 cri.go:89] found id: ""
	I1217 19:59:46.960981  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.960992  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:46.961013  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:46.961033  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:47.018065  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:47.018120  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:47.018143  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:47.049449  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:47.049483  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:47.078285  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:47.078319  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:47.104846  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:47.104873  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:47.133138  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:47.133170  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:47.173109  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:47.173143  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:47.239232  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:47.239269  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:47.255744  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:47.255774  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
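Between log-gathering rounds, the 596882 run keeps probing the apiserver's /healthz endpoint and records "connection refused" until the control plane comes up. A hedged sketch of such a poll is below; the real check authenticates with the cluster's client certificates, so the plain GET with InsecureSkipVerify here is a simplification for illustration only, and waitHealthz is a made-up name.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the given /healthz URL until it returns HTTP 200 or the
// timeout expires.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // apiserver not reachable yet; retry
	}
	return fmt.Errorf("apiserver healthz at %s did not become ready within %s", url, timeout)
}

func main() {
	if err := waitHealthz("https://192.168.76.2:8443/healthz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}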
	W1217 19:59:47.297786  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	W1217 19:59:49.798061  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	W1217 19:59:48.400236  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	W1217 19:59:50.400424  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	I1217 19:59:50.900321  613002 node_ready.go:49] node "no-preload-832842" is "Ready"
	I1217 19:59:50.900360  613002 node_ready.go:38] duration metric: took 13.003221681s for node "no-preload-832842" to be "Ready" ...
	I1217 19:59:50.900388  613002 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:59:50.900450  613002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:59:50.913774  613002 api_server.go:72] duration metric: took 13.353848727s to wait for apiserver process to appear ...
	I1217 19:59:50.913809  613002 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:59:50.913828  613002 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 19:59:50.919172  613002 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 19:59:50.920038  613002 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 19:59:50.920061  613002 api_server.go:131] duration metric: took 6.246549ms to wait for apiserver health ...
	I1217 19:59:50.920071  613002 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:59:50.923436  613002 system_pods.go:59] 8 kube-system pods found
	I1217 19:59:50.923465  613002 system_pods.go:61] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:50.923471  613002 system_pods.go:61] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:50.923477  613002 system_pods.go:61] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:50.923480  613002 system_pods.go:61] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:50.923485  613002 system_pods.go:61] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:50.923488  613002 system_pods.go:61] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:50.923492  613002 system_pods.go:61] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:50.923501  613002 system_pods.go:61] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:50.923506  613002 system_pods.go:74] duration metric: took 3.398333ms to wait for pod list to return data ...
	I1217 19:59:50.923516  613002 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:59:50.925827  613002 default_sa.go:45] found service account: "default"
	I1217 19:59:50.925845  613002 default_sa.go:55] duration metric: took 2.321212ms for default service account to be created ...
	I1217 19:59:50.925853  613002 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:59:50.928542  613002 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:50.928579  613002 system_pods.go:89] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:50.928589  613002 system_pods.go:89] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:50.928599  613002 system_pods.go:89] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:50.928610  613002 system_pods.go:89] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:50.928616  613002 system_pods.go:89] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:50.928622  613002 system_pods.go:89] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:50.928627  613002 system_pods.go:89] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:50.928634  613002 system_pods.go:89] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:50.928677  613002 retry.go:31] will retry after 250.930958ms: missing components: kube-dns
	I1217 19:59:51.184136  613002 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:51.184174  613002 system_pods.go:89] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:51.184183  613002 system_pods.go:89] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:51.184192  613002 system_pods.go:89] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:51.184201  613002 system_pods.go:89] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:51.184207  613002 system_pods.go:89] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:51.184212  613002 system_pods.go:89] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:51.184218  613002 system_pods.go:89] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:51.184226  613002 system_pods.go:89] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:51.184250  613002 retry.go:31] will retry after 376.252468ms: missing components: kube-dns
	I1217 19:59:51.564522  613002 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:51.564553  613002 system_pods.go:89] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Running
	I1217 19:59:51.564558  613002 system_pods.go:89] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:51.564562  613002 system_pods.go:89] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:51.564566  613002 system_pods.go:89] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:51.564577  613002 system_pods.go:89] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:51.564580  613002 system_pods.go:89] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:51.564583  613002 system_pods.go:89] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:51.564587  613002 system_pods.go:89] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Running
	I1217 19:59:51.564595  613002 system_pods.go:126] duration metric: took 638.736718ms to wait for k8s-apps to be running ...
	I1217 19:59:51.564602  613002 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:59:51.564649  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:59:51.578626  613002 system_svc.go:56] duration metric: took 14.012455ms WaitForService to wait for kubelet
	I1217 19:59:51.578656  613002 kubeadm.go:587] duration metric: took 14.01873722s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:59:51.578679  613002 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:59:51.581764  613002 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:59:51.581791  613002 node_conditions.go:123] node cpu capacity is 8
	I1217 19:59:51.581806  613002 node_conditions.go:105] duration metric: took 3.122744ms to run NodePressure ...
	I1217 19:59:51.581819  613002 start.go:242] waiting for startup goroutines ...
	I1217 19:59:51.581826  613002 start.go:247] waiting for cluster config update ...
	I1217 19:59:51.581836  613002 start.go:256] writing updated cluster config ...
	I1217 19:59:51.582177  613002 ssh_runner.go:195] Run: rm -f paused
	I1217 19:59:51.586289  613002 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:51.589582  613002 pod_ready.go:83] waiting for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.593968  613002 pod_ready.go:94] pod "coredns-7d764666f9-988jw" is "Ready"
	I1217 19:59:51.593996  613002 pod_ready.go:86] duration metric: took 4.395205ms for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.595975  613002 pod_ready.go:83] waiting for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.599696  613002 pod_ready.go:94] pod "etcd-no-preload-832842" is "Ready"
	I1217 19:59:51.599716  613002 pod_ready.go:86] duration metric: took 3.718479ms for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.601616  613002 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.605249  613002 pod_ready.go:94] pod "kube-apiserver-no-preload-832842" is "Ready"
	I1217 19:59:51.605278  613002 pod_ready.go:86] duration metric: took 3.640206ms for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.607229  613002 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.990405  613002 pod_ready.go:94] pod "kube-controller-manager-no-preload-832842" is "Ready"
	I1217 19:59:51.990437  613002 pod_ready.go:86] duration metric: took 383.184181ms for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
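The extra wait above iterates over a fixed set of label selectors (k8s-app=kube-dns, component=etcd, and so on) and blocks until every matching kube-system pod reports Ready. A rough approximation using kubectl wait follows, assuming kubectl and a reachable kubeconfig; minikube itself does this with client-go polling rather than kubectl, so treat this purely as a sketch of the idea.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Label selectors taken from the pod_ready log line above.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		// Block until every matching pod in kube-system reports the Ready condition.
		cmd := exec.Command("kubectl", "wait", "--namespace=kube-system",
			"--for=condition=Ready", "pod", "-l", sel, "--timeout=240s")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("pods with %q not ready: %v\n%s", sel, err, out)
			continue
		}
		fmt.Printf("%s", out)
	}
}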
	I1217 19:59:49.789056  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:49.789585  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:49.789657  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:49.789736  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:49.819917  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:49.819962  596882 cri.go:89] found id: ""
	I1217 19:59:49.819976  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:49.820049  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.824770  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:49.824849  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:49.853759  596882 cri.go:89] found id: ""
	I1217 19:59:49.853788  596882 logs.go:282] 0 containers: []
	W1217 19:59:49.853797  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:49.853803  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:49.853865  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:49.882284  596882 cri.go:89] found id: ""
	I1217 19:59:49.882314  596882 logs.go:282] 0 containers: []
	W1217 19:59:49.882326  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:49.882334  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:49.882399  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:49.911284  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:49.911316  596882 cri.go:89] found id: ""
	I1217 19:59:49.911331  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:49.911392  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.915472  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:49.915537  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:49.943673  596882 cri.go:89] found id: ""
	I1217 19:59:49.943699  596882 logs.go:282] 0 containers: []
	W1217 19:59:49.943707  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:49.943713  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:49.943770  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:49.972225  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:49.972250  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:49.972254  596882 cri.go:89] found id: ""
	I1217 19:59:49.972264  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:49.972327  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.976630  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.980609  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:49.980686  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:50.008940  596882 cri.go:89] found id: ""
	I1217 19:59:50.008986  596882 logs.go:282] 0 containers: []
	W1217 19:59:50.008999  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:50.009007  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:50.009072  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:50.036325  596882 cri.go:89] found id: ""
	I1217 19:59:50.036349  596882 logs.go:282] 0 containers: []
	W1217 19:59:50.036357  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:50.036374  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:50.036386  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:50.111248  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:50.111289  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:50.129160  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:50.129190  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:50.157595  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:50.157623  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	W1217 19:59:50.186611  596882 logs.go:130] failed kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:59:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log: no such file or directory"
	 output: 
	** stderr ** 
	time="2025-12-17T19:59:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log: no such file or directory"
	
	** /stderr **
	I1217 19:59:50.186639  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:50.186659  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:50.248285  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:50.248312  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:50.248328  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:50.281745  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:50.281779  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:50.311990  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:50.312023  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:50.356736  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:50.356774  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:52.191067  613002 pod_ready.go:83] waiting for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:52.590415  613002 pod_ready.go:94] pod "kube-proxy-jc5dd" is "Ready"
	I1217 19:59:52.590445  613002 pod_ready.go:86] duration metric: took 399.322512ms for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:52.790680  613002 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:53.190303  613002 pod_ready.go:94] pod "kube-scheduler-no-preload-832842" is "Ready"
	I1217 19:59:53.190337  613002 pod_ready.go:86] duration metric: took 399.629652ms for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:53.190354  613002 pod_ready.go:40] duration metric: took 1.604032629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:53.236152  613002 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 19:59:53.237766  613002 out.go:179] * Done! kubectl is now configured to use "no-preload-832842" cluster and "default" namespace by default
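The pod_ready waits above check each control-plane label in turn before declaring the no-preload-832842 profile ready. A rough kubectl equivalent of one such check, given here only as an illustration (minikube does not call kubectl wait, and the 120s timeout is an arbitrary example value):

    kubectl --context no-preload-832842 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=120s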
	W1217 19:59:52.297727  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	I1217 19:59:54.797471  612025 node_ready.go:49] node "old-k8s-version-894575" is "Ready"
	I1217 19:59:54.797501  612025 node_ready.go:38] duration metric: took 12.002996697s for node "old-k8s-version-894575" to be "Ready" ...
	I1217 19:59:54.797528  612025 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:59:54.797586  612025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:59:54.809929  612025 api_server.go:72] duration metric: took 12.337857281s to wait for apiserver process to appear ...
	I1217 19:59:54.809963  612025 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:59:54.809984  612025 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 19:59:54.815168  612025 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 19:59:54.816387  612025 api_server.go:141] control plane version: v1.28.0
	I1217 19:59:54.816414  612025 api_server.go:131] duration metric: took 6.443801ms to wait for apiserver health ...
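The healthz probe above is a plain HTTPS GET against the API server endpoint shown a few lines earlier. Because /healthz is readable without credentials by default (via the system:public-info-viewer role), the same check can be reproduced with curl; -k skips verification of the cluster's self-signed certificate:

    curl -k https://192.168.85.2:8443/healthz
    # prints "ok" on success, matching the 200 response logged above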
	I1217 19:59:54.816423  612025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:59:54.819975  612025 system_pods.go:59] 8 kube-system pods found
	I1217 19:59:54.820018  612025 system_pods.go:61] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:54.820032  612025 system_pods.go:61] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:54.820038  612025 system_pods.go:61] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:54.820042  612025 system_pods.go:61] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:54.820045  612025 system_pods.go:61] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:54.820049  612025 system_pods.go:61] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:54.820052  612025 system_pods.go:61] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:54.820058  612025 system_pods.go:61] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:54.820068  612025 system_pods.go:74] duration metric: took 3.638852ms to wait for pod list to return data ...
	I1217 19:59:54.820098  612025 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:59:54.822576  612025 default_sa.go:45] found service account: "default"
	I1217 19:59:54.822600  612025 default_sa.go:55] duration metric: took 2.491238ms for default service account to be created ...
	I1217 19:59:54.822610  612025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:59:54.826610  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:54.826648  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:54.826657  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:54.826668  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:54.826679  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:54.826687  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:54.826698  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:54.826704  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:54.826718  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:54.826766  612025 retry.go:31] will retry after 257.442605ms: missing components: kube-dns
	I1217 19:59:52.890123  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:55.090303  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:55.090342  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:55.090351  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:55.090361  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:55.090366  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:55.090372  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:55.090377  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:55.090382  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:55.090398  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:55.090427  612025 retry.go:31] will retry after 245.454795ms: missing components: kube-dns
	I1217 19:59:55.340574  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:55.340614  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:55.340621  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:55.340629  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:55.340632  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:55.340639  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:55.340643  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:55.340646  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:55.340650  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:55.340665  612025 retry.go:31] will retry after 386.121813ms: missing components: kube-dns
	I1217 19:59:55.730667  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:55.730703  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Running
	I1217 19:59:55.730709  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:55.730713  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:55.730716  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:55.730720  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:55.730723  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:55.730727  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:55.730729  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Running
	I1217 19:59:55.730737  612025 system_pods.go:126] duration metric: took 908.122007ms to wait for k8s-apps to be running ...
	I1217 19:59:55.730755  612025 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:59:55.730805  612025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:59:55.743927  612025 system_svc.go:56] duration metric: took 13.156324ms WaitForService to wait for kubelet
	I1217 19:59:55.743958  612025 kubeadm.go:587] duration metric: took 13.271894959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
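The kubelet readiness check above is simply systemd's is-active probe, run over SSH. When pods stall, it can help to check the container runtime unit the same way (crio is the unit name this job queries elsewhere via journalctl):

    sudo systemctl is-active kubelet
    sudo systemctl is-active crio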
	I1217 19:59:55.743981  612025 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:59:55.747182  612025 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:59:55.747242  612025 node_conditions.go:123] node cpu capacity is 8
	I1217 19:59:55.747267  612025 node_conditions.go:105] duration metric: took 3.279192ms to run NodePressure ...
	I1217 19:59:55.747282  612025 start.go:242] waiting for startup goroutines ...
	I1217 19:59:55.747292  612025 start.go:247] waiting for cluster config update ...
	I1217 19:59:55.747306  612025 start.go:256] writing updated cluster config ...
	I1217 19:59:55.747638  612025 ssh_runner.go:195] Run: rm -f paused
	I1217 19:59:55.752094  612025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:55.755926  612025 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.759989  612025 pod_ready.go:94] pod "coredns-5dd5756b68-gbhs5" is "Ready"
	I1217 19:59:55.760009  612025 pod_ready.go:86] duration metric: took 4.059125ms for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.762497  612025 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.766155  612025 pod_ready.go:94] pod "etcd-old-k8s-version-894575" is "Ready"
	I1217 19:59:55.766173  612025 pod_ready.go:86] duration metric: took 3.656843ms for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.768581  612025 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.772199  612025 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-894575" is "Ready"
	I1217 19:59:55.772220  612025 pod_ready.go:86] duration metric: took 3.619593ms for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.774707  612025 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.155733  612025 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-894575" is "Ready"
	I1217 19:59:56.155766  612025 pod_ready.go:86] duration metric: took 381.041505ms for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.356678  612025 pod_ready.go:83] waiting for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.755891  612025 pod_ready.go:94] pod "kube-proxy-bdzb6" is "Ready"
	I1217 19:59:56.755920  612025 pod_ready.go:86] duration metric: took 399.214523ms for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.956567  612025 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:57.356662  612025 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-894575" is "Ready"
	I1217 19:59:57.356691  612025 pod_ready.go:86] duration metric: took 400.097199ms for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:57.356705  612025 pod_ready.go:40] duration metric: took 1.604578881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:57.405563  612025 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1217 19:59:57.407226  612025 out.go:203] 
	W1217 19:59:57.408483  612025 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 19:59:57.409590  612025 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 19:59:57.410881  612025 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-894575" cluster and "default" namespace by default
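The "minor skew: 7" warning above reflects kubectl's support policy of one minor version on either side of the server, so a 1.35.0 client against a 1.28.0 cluster is well outside it. The log's own suggestion is to use the client bundled with minikube; adding the profile flag for clarity (the -p flag is not part of the logged hint), that would be:

    minikube -p old-k8s-version-894575 kubectl -- get pods -A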
	I1217 19:59:57.891275  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 19:59:57.891343  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:57.891398  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:57.920327  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 19:59:57.920358  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:57.920364  596882 cri.go:89] found id: ""
	I1217 19:59:57.920376  596882 logs.go:282] 2 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:57.920433  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:57.924597  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:57.928344  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:57.928422  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:57.958281  596882 cri.go:89] found id: ""
	I1217 19:59:57.958306  596882 logs.go:282] 0 containers: []
	W1217 19:59:57.958315  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:57.958320  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:57.958370  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:57.986230  596882 cri.go:89] found id: ""
	I1217 19:59:57.986257  596882 logs.go:282] 0 containers: []
	W1217 19:59:57.986266  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:57.986272  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:57.986356  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:58.014902  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:58.014931  596882 cri.go:89] found id: ""
	I1217 19:59:58.014943  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:58.014996  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:58.019404  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:58.019483  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:58.046749  596882 cri.go:89] found id: ""
	I1217 19:59:58.046772  596882 logs.go:282] 0 containers: []
	W1217 19:59:58.046781  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:58.046788  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:58.046850  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:58.074828  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:58.074850  596882 cri.go:89] found id: ""
	I1217 19:59:58.074858  596882 logs.go:282] 1 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb]
	I1217 19:59:58.074927  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:58.078847  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:58.078910  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:58.107098  596882 cri.go:89] found id: ""
	I1217 19:59:58.107126  596882 logs.go:282] 0 containers: []
	W1217 19:59:58.107135  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:58.107142  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:58.107217  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:58.136602  596882 cri.go:89] found id: ""
	I1217 19:59:58.136626  596882 logs.go:282] 0 containers: []
	W1217 19:59:58.136635  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:58.136650  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:58.136662  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:58.168742  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:58.168777  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:58.198804  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:58.198832  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:58.227814  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:58.227847  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:58.302151  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:58.302192  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:58.319927  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:58.319975  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Dec 17 19:59:50 no-preload-832842 crio[775]: time="2025-12-17T19:59:50.972412525Z" level=info msg="Starting container: ad63430c5a21ca247c1c1bd4f4b49973ba7f425938e1bf4dad20e07cd9cc4372" id=ea4ead1d-00c2-4414-94e3-b4b81241e001 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 19:59:50 no-preload-832842 crio[775]: time="2025-12-17T19:59:50.974457526Z" level=info msg="Started container" PID=2798 containerID=ad63430c5a21ca247c1c1bd4f4b49973ba7f425938e1bf4dad20e07cd9cc4372 description=kube-system/coredns-7d764666f9-988jw/coredns id=ea4ead1d-00c2-4414-94e3-b4b81241e001 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c41a5f4709c44bc8cee09af55735d20371c1e2c00a9fdf5b31daf9ee2b49b0a
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.694489442Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c26210b2-a168-491a-9228-21b80e200d0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.694562692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.700130098Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bcc253ab23819c23d79078cd8ff247b4d14f1fde0d27c2e540fbc7eeeb4f645d UID:71149176-ff99-466f-92c1-b41eec28d488 NetNS:/var/run/netns/de1c9bb6-af9f-42c9-a370-e01f1e9a46e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009dae80}] Aliases:map[]}"
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.700173189Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.710669788Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:bcc253ab23819c23d79078cd8ff247b4d14f1fde0d27c2e540fbc7eeeb4f645d UID:71149176-ff99-466f-92c1-b41eec28d488 NetNS:/var/run/netns/de1c9bb6-af9f-42c9-a370-e01f1e9a46e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009dae80}] Aliases:map[]}"
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.71080952Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.711607435Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.712480554Z" level=info msg="Ran pod sandbox bcc253ab23819c23d79078cd8ff247b4d14f1fde0d27c2e540fbc7eeeb4f645d with infra container: default/busybox/POD" id=c26210b2-a168-491a-9228-21b80e200d0d name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.713709461Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4ff40702-0023-49f1-a4c7-d1f39bd792bc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.713822597Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4ff40702-0023-49f1-a4c7-d1f39bd792bc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.713856451Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4ff40702-0023-49f1-a4c7-d1f39bd792bc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.71458179Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c1b37726-4e54-48e0-9d0a-a43b2e733b87 name=/runtime.v1.ImageService/PullImage
	Dec 17 19:59:53 no-preload-832842 crio[775]: time="2025-12-17T19:59:53.715932865Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.011417035Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=c1b37726-4e54-48e0-9d0a-a43b2e733b87 name=/runtime.v1.ImageService/PullImage
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.011955047Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=30b32af0-ce3d-4e96-98a5-419526e72320 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.013452336Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ff2d1b54-29ae-45aa-972f-4f29c4d0823b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.016996695Z" level=info msg="Creating container: default/busybox/busybox" id=35eebcf4-3534-4757-95ac-253671166666 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.017112196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.020865661Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.021313676Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.050980733Z" level=info msg="Created container a87cd239252d944c4f126a63d91d9302b385d809fd711b6affbb593c2e61b7bc: default/busybox/busybox" id=35eebcf4-3534-4757-95ac-253671166666 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.051635668Z" level=info msg="Starting container: a87cd239252d944c4f126a63d91d9302b385d809fd711b6affbb593c2e61b7bc" id=60f4347b-04ee-4371-8e8f-25f1f6edbd5e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 19:59:55 no-preload-832842 crio[775]: time="2025-12-17T19:59:55.053815295Z" level=info msg="Started container" PID=2873 containerID=a87cd239252d944c4f126a63d91d9302b385d809fd711b6affbb593c2e61b7bc description=default/busybox/busybox id=60f4347b-04ee-4371-8e8f-25f1f6edbd5e name=/runtime.v1.RuntimeService/StartContainer sandboxID=bcc253ab23819c23d79078cd8ff247b4d14f1fde0d27c2e540fbc7eeeb4f645d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	a87cd239252d9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   bcc253ab23819       busybox                                     default
	ad63430c5a21c       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                      11 seconds ago      Running             coredns                   0                   3c41a5f4709c4       coredns-7d764666f9-988jw                    kube-system
	197e41df4fc34       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   1308e7d1a09c2       storage-provisioner                         kube-system
	b48b0ab37ba27       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   b3bc89dcf0c51       kindnet-t5x5v                               kube-system
	0a20d89093576       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                      24 seconds ago      Running             kube-proxy                0                   251f1b05de7c5       kube-proxy-jc5dd                            kube-system
	3ca2fc3220d24       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                      34 seconds ago      Running             kube-controller-manager   0                   fe0c2eab7fd3c       kube-controller-manager-no-preload-832842   kube-system
	7c7e2759db6a0       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                      34 seconds ago      Running             kube-scheduler            0                   41fc4e8bdbc4a       kube-scheduler-no-preload-832842            kube-system
	b57be1cdaa83b       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                      34 seconds ago      Running             kube-apiserver            0                   e911038d51c90       kube-apiserver-no-preload-832842            kube-system
	7ebf30f19d47c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                      34 seconds ago      Running             etcd                      0                   c4200b6117078       etcd-no-preload-832842                      kube-system
	
	
	==> coredns [ad63430c5a21ca247c1c1bd4f4b49973ba7f425938e1bf4dad20e07cd9cc4372] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:37553 - 12419 "HINFO IN 3445781668196100264.91275461358376766. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.022225775s
	
	
	==> describe nodes <==
	Name:               no-preload-832842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-832842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=no-preload-832842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_59_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-832842
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:59:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:59:50 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:59:50 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:59:50 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:59:50 +0000   Wed, 17 Dec 2025 19:59:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-832842
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e81b3478-a278-4914-8840-ea9b4f5123a7
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-7d764666f9-988jw                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-832842                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-t5x5v                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-832842             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-832842    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-jc5dd                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-832842             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  26s   node-controller  Node no-preload-832842 event: Registered Node no-preload-832842 in Controller
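For reference, the percentages in the Allocated resources table above are requests and limits divided by the node's allocatable values: 850m of CPU against 8 CPUs (8000m) is about 10.6%, shown as 10%, and 220Mi of memory against 32863360Ki (roughly 31.3Gi) is under 1%, shown as 0%.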
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [7ebf30f19d47c487560ee055e7d4d0fd371c54339f34e826c04697e4ebf5573f] <==
	{"level":"info","ts":"2025-12-17T19:59:28.840508Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T19:59:28.840520Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T19:59:28.872732Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-832842 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T19:59:28.873371Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T19:59:28.873735Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:59:28.873941Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:59:28.874094Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T19:59:28.874115Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T19:59:28.874207Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T19:59:28.874381Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T19:59:28.874450Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-17T19:59:28.874649Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-17T19:59:28.875186Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:59:28.876252Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T19:59:28.880104Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T19:59:28.889346Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-17T19:59:28.889340Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2025-12-17T19:59:34.606396Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"203.034195ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:59:34.606590Z","caller":"traceutil/trace.go:172","msg":"trace[232581423] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:320; }","duration":"203.257889ms","start":"2025-12-17T19:59:34.403308Z","end":"2025-12-17T19:59:34.606566Z","steps":["trace[232581423] 'agreement among raft nodes before linearized reading'  (duration: 76.185596ms)","trace[232581423] 'range keys from in-memory index tree'  (duration: 126.829561ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:59:34.606909Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.925383ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790748516354848 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-832842\" mod_revision:316 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-832842\" value_size:7437 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-no-preload-832842\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T19:59:34.607044Z","caller":"traceutil/trace.go:172","msg":"trace[496904784] linearizableReadLoop","detail":"{readStateIndex:333; appliedIndex:332; }","duration":"127.572415ms","start":"2025-12-17T19:59:34.479458Z","end":"2025-12-17T19:59:34.607030Z","steps":["trace[496904784] 'read index received'  (duration: 21.914µs)","trace[496904784] 'applied index is now lower than readState.Index'  (duration: 127.549682ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T19:59:34.607229Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"195.551604ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T19:59:34.607273Z","caller":"traceutil/trace.go:172","msg":"trace[323592243] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:0; response_revision:321; }","duration":"195.606449ms","start":"2025-12-17T19:59:34.411659Z","end":"2025-12-17T19:59:34.607266Z","steps":["trace[323592243] 'agreement among raft nodes before linearized reading'  (duration: 195.523164ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T19:59:34.607228Z","caller":"traceutil/trace.go:172","msg":"trace[875144488] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"205.043769ms","start":"2025-12-17T19:59:34.402003Z","end":"2025-12-17T19:59:34.607047Z","steps":["trace[875144488] 'process raft request'  (duration: 77.531516ms)","trace[875144488] 'compare'  (duration: 126.723576ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T19:59:34.766776Z","caller":"traceutil/trace.go:172","msg":"trace[520697891] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"150.987826ms","start":"2025-12-17T19:59:34.615769Z","end":"2025-12-17T19:59:34.766757Z","steps":["trace[520697891] 'process raft request'  (duration: 149.498687ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:00:03 up  1:42,  0 user,  load average: 2.54, 3.02, 2.18
	Linux no-preload-832842 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b48b0ab37ba27e08aa68a997a563824b269db2d232c535e00d7e4cdc8233aa42] <==
	I1217 19:59:39.881006       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 19:59:39.881386       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 19:59:39.881598       1 main.go:148] setting mtu 1500 for CNI 
	I1217 19:59:39.881624       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 19:59:39.881691       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T19:59:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 19:59:40.086377       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 19:59:40.086399       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 19:59:40.086407       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 19:59:40.179660       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 19:59:40.487473       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 19:59:40.487495       1 metrics.go:72] Registering metrics
	I1217 19:59:40.487552       1 controller.go:711] "Syncing nftables rules"
	I1217 19:59:50.087800       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 19:59:50.087844       1 main.go:301] handling current node
	I1217 20:00:00.086649       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:00:00.086686       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b57be1cdaa83bad7b0816ae850a0a39963b99c8093ebedbeba06f87e6041daa8] <==
	I1217 19:59:29.949354       1 policy_source.go:248] refreshing policies
	E1217 19:59:29.983676       1 controller.go:156] "Error while syncing ConfigMap" err="namespaces \"kube-system\" not found" logger="UnhandledError" configmap="kube-system/kube-apiserver-legacy-service-account-token-tracking"
	I1217 19:59:30.030012       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 19:59:30.046719       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:59:30.046990       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 19:59:30.051035       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:59:30.125107       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 19:59:30.829641       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 19:59:30.833394       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 19:59:30.833412       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 19:59:31.313168       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 19:59:31.354424       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 19:59:31.434503       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 19:59:31.441523       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1217 19:59:31.442761       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 19:59:31.448320       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 19:59:31.864448       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 19:59:32.548273       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 19:59:32.556743       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 19:59:32.564098       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 19:59:37.318153       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:59:37.322246       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 19:59:37.517728       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 19:59:37.869364       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 20:00:01.479369       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:37808: use of closed network connection
	
	
	==> kube-controller-manager [3ca2fc3220d242c7f78f83ae03b2e6d01dbbc70231037320d32f9e4725f0b8c2] <==
	I1217 19:59:36.674601       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:59:36.674607       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.674791       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.674805       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.675269       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1217 19:59:36.675446       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.675577       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.675776       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.675304       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.675486       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="no-preload-832842"
	I1217 19:59:36.676129       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 19:59:36.676133       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.676190       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.676349       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.676318       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.676535       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.676963       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.679315       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.681902       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-832842" podCIDRs=["10.244.0.0/24"]
	I1217 19:59:36.682493       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:59:36.772391       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:36.772415       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 19:59:36.772419       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 19:59:36.782897       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:51.678541       1 node_lifecycle_controller.go:1057] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [0a20d890935763b03ce8cf83a970e1887bbf79d818c1a4641cbdc75d2c10501b] <==
	I1217 19:59:38.297529       1 server_linux.go:53] "Using iptables proxy"
	I1217 19:59:38.370979       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 19:59:38.471786       1 shared_informer.go:377] "Caches are synced"
	I1217 19:59:38.471827       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 19:59:38.471950       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 19:59:38.492866       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 19:59:38.492932       1 server_linux.go:136] "Using iptables Proxier"
	I1217 19:59:38.499559       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 19:59:38.500004       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 19:59:38.500026       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:59:38.501316       1 config.go:200] "Starting service config controller"
	I1217 19:59:38.501337       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 19:59:38.501359       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 19:59:38.501361       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 19:59:38.501404       1 config.go:106] "Starting endpoint slice config controller"
	I1217 19:59:38.501410       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 19:59:38.501417       1 config.go:309] "Starting node config controller"
	I1217 19:59:38.501427       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 19:59:38.501434       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 19:59:38.601529       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 19:59:38.601585       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 19:59:38.601616       1 shared_informer.go:356] "Caches are synced" controller="service config"
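The kube-proxy warning above ("nodePortAddresses is unset") is informational: without it, NodePort services accept connections on every local IP. The message itself names the fix; as a sketch of the flag it refers to (not a change applied in this run), the proxy would be started with:

    kube-proxy --nodeport-addresses primary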
	
	
	==> kube-scheduler [7c7e2759db6a02d55992f079b3bc7375278c37bf39e1e463978f97c52340e1fa] <==
	E1217 19:59:29.880743       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 19:59:29.880844       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 19:59:29.880921       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 19:59:29.881063       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 19:59:29.881266       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 19:59:29.881278       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 19:59:29.881495       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 19:59:29.881541       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 19:59:29.881905       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 19:59:29.881968       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 19:59:29.882046       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 19:59:29.882123       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 19:59:29.882103       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1217 19:59:30.708538       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 19:59:30.803307       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 19:59:30.808059       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 19:59:30.812917       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1217 19:59:30.832118       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 19:59:30.881769       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1217 19:59:30.938868       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 19:59:30.949336       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 19:59:31.091139       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 19:59:31.127434       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 19:59:31.276598       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1217 19:59:33.375224       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.993902    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/4d27aa06-e030-44a0-880e-a5ae02e7b951-cni-cfg\") pod \"kindnet-t5x5v\" (UID: \"4d27aa06-e030-44a0-880e-a5ae02e7b951\") " pod="kube-system/kindnet-t5x5v"
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.993943    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4d27aa06-e030-44a0-880e-a5ae02e7b951-xtables-lock\") pod \"kindnet-t5x5v\" (UID: \"4d27aa06-e030-44a0-880e-a5ae02e7b951\") " pod="kube-system/kindnet-t5x5v"
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.993965    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7v22\" (UniqueName: \"kubernetes.io/projected/4d27aa06-e030-44a0-880e-a5ae02e7b951-kube-api-access-r7v22\") pod \"kindnet-t5x5v\" (UID: \"4d27aa06-e030-44a0-880e-a5ae02e7b951\") " pod="kube-system/kindnet-t5x5v"
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.994027    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29jmq\" (UniqueName: \"kubernetes.io/projected/5c5c87dc-6dbc-4133-9e90-d0650e6a5048-kube-api-access-29jmq\") pod \"kube-proxy-jc5dd\" (UID: \"5c5c87dc-6dbc-4133-9e90-d0650e6a5048\") " pod="kube-system/kube-proxy-jc5dd"
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.994119    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4d27aa06-e030-44a0-880e-a5ae02e7b951-lib-modules\") pod \"kindnet-t5x5v\" (UID: \"4d27aa06-e030-44a0-880e-a5ae02e7b951\") " pod="kube-system/kindnet-t5x5v"
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.994163    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c5c87dc-6dbc-4133-9e90-d0650e6a5048-lib-modules\") pod \"kube-proxy-jc5dd\" (UID: \"5c5c87dc-6dbc-4133-9e90-d0650e6a5048\") " pod="kube-system/kube-proxy-jc5dd"
	Dec 17 19:59:37 no-preload-832842 kubelet[2201]: I1217 19:59:37.994240    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5c5c87dc-6dbc-4133-9e90-d0650e6a5048-kube-proxy\") pod \"kube-proxy-jc5dd\" (UID: \"5c5c87dc-6dbc-4133-9e90-d0650e6a5048\") " pod="kube-system/kube-proxy-jc5dd"
	Dec 17 19:59:40 no-preload-832842 kubelet[2201]: I1217 19:59:40.420490    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-t5x5v" podStartSLOduration=2.029443175 podStartE2EDuration="3.420468132s" podCreationTimestamp="2025-12-17 19:59:37 +0000 UTC" firstStartedPulling="2025-12-17 19:59:38.213257526 +0000 UTC m=+5.927316315" lastFinishedPulling="2025-12-17 19:59:39.604282487 +0000 UTC m=+7.318341272" observedRunningTime="2025-12-17 19:59:40.42044389 +0000 UTC m=+8.134502683" watchObservedRunningTime="2025-12-17 19:59:40.420468132 +0000 UTC m=+8.134526927"
	Dec 17 19:59:40 no-preload-832842 kubelet[2201]: I1217 19:59:40.420840    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-jc5dd" podStartSLOduration=3.420825385 podStartE2EDuration="3.420825385s" podCreationTimestamp="2025-12-17 19:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:59:38.411351837 +0000 UTC m=+6.125410629" watchObservedRunningTime="2025-12-17 19:59:40.420825385 +0000 UTC m=+8.134884178"
	Dec 17 19:59:43 no-preload-832842 kubelet[2201]: E1217 19:59:43.091308    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-832842" containerName="kube-apiserver"
	Dec 17 19:59:43 no-preload-832842 kubelet[2201]: E1217 19:59:43.766429    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-832842" containerName="kube-controller-manager"
	Dec 17 19:59:44 no-preload-832842 kubelet[2201]: E1217 19:59:44.675274    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-832842" containerName="etcd"
	Dec 17 19:59:47 no-preload-832842 kubelet[2201]: E1217 19:59:47.581342    2201 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-832842" containerName="kube-scheduler"
	Dec 17 19:59:50 no-preload-832842 kubelet[2201]: I1217 19:59:50.596325    2201 kubelet_node_status.go:427] "Fast updating node status as it just became ready"
	Dec 17 19:59:50 no-preload-832842 kubelet[2201]: I1217 19:59:50.698755    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h24kz\" (UniqueName: \"kubernetes.io/projected/2e2dabc4-5e32-46d9-a290-4dec02241395-kube-api-access-h24kz\") pod \"coredns-7d764666f9-988jw\" (UID: \"2e2dabc4-5e32-46d9-a290-4dec02241395\") " pod="kube-system/coredns-7d764666f9-988jw"
	Dec 17 19:59:50 no-preload-832842 kubelet[2201]: I1217 19:59:50.698809    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed-tmp\") pod \"storage-provisioner\" (UID: \"d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed\") " pod="kube-system/storage-provisioner"
	Dec 17 19:59:50 no-preload-832842 kubelet[2201]: I1217 19:59:50.698838    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t54w\" (UniqueName: \"kubernetes.io/projected/d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed-kube-api-access-5t54w\") pod \"storage-provisioner\" (UID: \"d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed\") " pod="kube-system/storage-provisioner"
	Dec 17 19:59:50 no-preload-832842 kubelet[2201]: I1217 19:59:50.698944    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e2dabc4-5e32-46d9-a290-4dec02241395-config-volume\") pod \"coredns-7d764666f9-988jw\" (UID: \"2e2dabc4-5e32-46d9-a290-4dec02241395\") " pod="kube-system/coredns-7d764666f9-988jw"
	Dec 17 19:59:51 no-preload-832842 kubelet[2201]: E1217 19:59:51.435209    2201 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-988jw" containerName="coredns"
	Dec 17 19:59:51 no-preload-832842 kubelet[2201]: I1217 19:59:51.444763    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.444744166 podStartE2EDuration="13.444744166s" podCreationTimestamp="2025-12-17 19:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:59:51.444549489 +0000 UTC m=+19.158608284" watchObservedRunningTime="2025-12-17 19:59:51.444744166 +0000 UTC m=+19.158802959"
	Dec 17 19:59:51 no-preload-832842 kubelet[2201]: I1217 19:59:51.455687    2201 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-988jw" podStartSLOduration=14.455666724 podStartE2EDuration="14.455666724s" podCreationTimestamp="2025-12-17 19:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:59:51.45546629 +0000 UTC m=+19.169525085" watchObservedRunningTime="2025-12-17 19:59:51.455666724 +0000 UTC m=+19.169725517"
	Dec 17 19:59:52 no-preload-832842 kubelet[2201]: E1217 19:59:52.436825    2201 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-988jw" containerName="coredns"
	Dec 17 19:59:53 no-preload-832842 kubelet[2201]: I1217 19:59:53.416984    2201 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxm25\" (UniqueName: \"kubernetes.io/projected/71149176-ff99-466f-92c1-b41eec28d488-kube-api-access-mxm25\") pod \"busybox\" (UID: \"71149176-ff99-466f-92c1-b41eec28d488\") " pod="default/busybox"
	Dec 17 19:59:53 no-preload-832842 kubelet[2201]: E1217 19:59:53.438883    2201 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-988jw" containerName="coredns"
	Dec 17 20:00:01 no-preload-832842 kubelet[2201]: E1217 20:00:01.479297    2201 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43628->127.0.0.1:45801: write tcp 127.0.0.1:43628->127.0.0.1:45801: write: broken pipe
	
	
	==> storage-provisioner [197e41df4fc34f5a6dc2f01b3ca7e796d67d104b668b6ccd5a31d5956a975d7d] <==
	I1217 19:59:50.984402       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 19:59:50.993026       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 19:59:50.993102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 19:59:50.995266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:51.000228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 19:59:51.000370       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 19:59:51.000511       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-832842_af734c78-0c0d-4d9d-8814-252ce7e8f0e4!
	I1217 19:59:51.000474       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc57e620-3a27-4c8c-a77e-e1c5cd6ef8f6", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-832842_af734c78-0c0d-4d9d-8814-252ce7e8f0e4 became leader
	W1217 19:59:51.002428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:51.007383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 19:59:51.101697       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-832842_af734c78-0c0d-4d9d-8814-252ce7e8f0e4!
	W1217 19:59:53.011439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:53.016932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:55.019910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:55.023802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:57.026830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:57.031340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:59.034186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 19:59:59.038254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:00:01.041662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:00:01.048711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:00:03.051592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:00:03.055550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
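The storage-provisioner section at the end of those logs shows the provisioner coordinating leader election through a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is likely why the EndpointSlice deprecation warning repeats on every lease renewal. A quick way to look at that lock object, assuming kubectl is still pointed at the no-preload-832842 context used by this test:

	# the Endpoints object the provisioner uses as its leader-election lock
	kubectl --context no-preload-832842 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml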
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-832842 -n no-preload-832842
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-832842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.38s)

                                                
                                    

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.488916ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:00:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
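The exit status 11 above comes out of minikube's paused-state check: enabling an addon first lists the node's containers (check paused, then list paused, then runc), and on this node "sudo runc list -f json" fails because /run/runc is missing, as the stderr shows. A minimal way to reproduce just that step against the same profile, assuming the cluster from this run is still up:

	# run the same container listing the addon code runs, inside the node
	out/minikube-linux-amd64 -p old-k8s-version-894575 ssh -- sudo runc list -f json
	# on this node: "open /run/runc: no such file or directory" (exit status 1)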
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-894575 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-894575 describe deploy/metrics-server -n kube-system: exit status 1 (61.097574ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-894575 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-894575
helpers_test.go:244: (dbg) docker inspect old-k8s-version-894575:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c",
	        "Created": "2025-12-17T19:59:10.569830275Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 613456,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T19:59:10.617066051Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/hosts",
	        "LogPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c-json.log",
	        "Name": "/old-k8s-version-894575",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-894575:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-894575",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c",
	                "LowerDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-894575",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-894575/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-894575",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-894575",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-894575",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "969df44ee6c025d9e1da0027c89a15b7ef239895f605cac39b2dfb2cd556e8cf",
	            "SandboxKey": "/var/run/docker/netns/969df44ee6c0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-894575": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0ce1019d98582b4ef902421b21faaa999552d06bbfa4979e1d39a9d27bb73b1",
	                    "EndpointID": "1e8402ace23ac800c918741df30ff8d4419b3d9c3c713ae3268be7a753d32c1c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "92:a9:d3:56:28:e2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-894575",
	                        "f5ebc1c53bc8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
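The useful part of that inspect output is the port table: 8443/tcp (the API server) is published on 127.0.0.1:33436 and 22/tcp (SSH) on 127.0.0.1:33433. A sketch for pulling a single mapping back out of the inspect JSON with Docker's Go-template syntax, using the profile name from this run:

	# print the host port that 8443/tcp is published on for this container
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' old-k8s-version-894575
	# prints 33436 for the container captured above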
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-894575 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-894575 logs -n 25: (1.119997576s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-601560 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo containerd config dump                                                                                                                                                                                                  │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ ssh     │ -p cilium-601560 sudo crio config                                                                                                                                                                                                             │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │                     │
	│ delete  │ -p cilium-601560                                                                                                                                                                                                                              │ cilium-601560                │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p cert-options-997440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio                     │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ stop    │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p NoKubernetes-327438 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ cert-options-997440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p cert-options-997440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p cert-options-997440                                                                                                                                                                                                                        │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p NoKubernetes-327438 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                               │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:59:07
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:59:07.163690  613002 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:59:07.163841  613002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:59:07.163849  613002 out.go:374] Setting ErrFile to fd 2...
	I1217 19:59:07.163855  613002 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:59:07.164194  613002 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:59:07.164836  613002 out.go:368] Setting JSON to false
	I1217 19:59:07.166288  613002 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6098,"bootTime":1765995449,"procs":289,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:59:07.166372  613002 start.go:143] virtualization: kvm guest
	I1217 19:59:07.171555  613002 out.go:179] * [no-preload-832842] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:59:07.173234  613002 notify.go:221] Checking for updates...
	I1217 19:59:07.173302  613002 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:59:07.174663  613002 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:59:07.179613  613002 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:59:07.181140  613002 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:59:07.182442  613002 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:59:07.183702  613002 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:59:02.464936  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:02.465384  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:02.965029  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:02.965493  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:03.465143  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:03.465610  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:03.965216  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:03.965667  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:04.464948  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:04.465470  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:04.964954  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:04.965420  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:05.465113  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:05.465545  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:05.965163  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:05.965549  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:06.465209  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:06.465648  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:06.965176  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:06.965625  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:07.189252  613002 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:59:07.189411  613002 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:59:07.189554  613002 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 19:59:07.189708  613002 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:59:07.217800  613002 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:59:07.217948  613002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:59:07.283633  613002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 19:59:07.2713645 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:59:07.283787  613002 docker.go:319] overlay module found
	I1217 19:59:07.285672  613002 out.go:179] * Using the docker driver based on user configuration
	I1217 19:59:07.286708  613002 start.go:309] selected driver: docker
	I1217 19:59:07.286724  613002 start.go:927] validating driver "docker" against <nil>
	I1217 19:59:07.286738  613002 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:59:07.287471  613002 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:59:07.346850  613002 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 19:59:07.336017187 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:59:07.347017  613002 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:59:07.347269  613002 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:59:07.348945  613002 out.go:179] * Using Docker driver with root privileges
	I1217 19:59:07.350157  613002 cni.go:84] Creating CNI manager for ""
	I1217 19:59:07.350254  613002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:07.350270  613002 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 19:59:07.350385  613002 start.go:353] cluster config:
	{Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:59:07.351786  613002 out.go:179] * Starting "no-preload-832842" primary control-plane node in "no-preload-832842" cluster
	I1217 19:59:07.352940  613002 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 19:59:07.354124  613002 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 19:59:07.355110  613002 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:59:07.355201  613002 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 19:59:07.355218  613002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json ...
	I1217 19:59:07.355251  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json: {Name:mke41a27585b2fe600b2f3d48e81fa7a9c8fa347 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:07.355399  613002 cache.go:107] acquiring lock: {Name:mkcbe01b68b1228540a4060035e71f760b6eb215 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355397  613002 cache.go:107] acquiring lock: {Name:mkf47f2e6c696152682e65be33119c2f43b3bb74 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355439  613002 cache.go:107] acquiring lock: {Name:mk771abb5794f06a8d4c1ae0daf61ddb16c9a0d5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355460  613002 cache.go:107] acquiring lock: {Name:mk74d4b3a0b59766e169c7e12524465d5725aec1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355486  613002 cache.go:115] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1217 19:59:07.355497  613002 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 105.388µs
	I1217 19:59:07.355515  613002 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1217 19:59:07.355491  613002 cache.go:107] acquiring lock: {Name:mk098c0851fafa2f04384b394b02f76db8624c86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355531  613002 cache.go:107] acquiring lock: {Name:mk151219bf56732e207466095277e35e24e25e44 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355537  613002 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:07.355551  613002 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:07.355571  613002 cache.go:107] acquiring lock: {Name:mk3531dda110c99b8d236ae9f26b1d573c3696cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355635  613002 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:07.355672  613002 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:07.355684  613002 cache.go:115] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1217 19:59:07.355665  613002 cache.go:107] acquiring lock: {Name:mk1bb362c47f07be5bf19f353c27e03a385bbbad Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.355695  613002 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 171.567µs
	I1217 19:59:07.355713  613002 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1217 19:59:07.355847  613002 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:07.356023  613002 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:07.356947  613002 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:07.356959  613002 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:07.356959  613002 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:07.356966  613002 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:07.356959  613002 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:07.356947  613002 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:07.379591  613002 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 19:59:07.379613  613002 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 19:59:07.379629  613002 cache.go:243] Successfully downloaded all kic artifacts
	I1217 19:59:07.379664  613002 start.go:360] acquireMachinesLock for no-preload-832842: {Name:mka72685b85221388ed3605f67ec1d1d5d2a5266 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 19:59:07.379771  613002 start.go:364] duration metric: took 83.12µs to acquireMachinesLock for "no-preload-832842"
	I1217 19:59:07.379803  613002 start.go:93] Provisioning new machine with config: &{Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:07.379894  613002 start.go:125] createHost starting for "" (driver="docker")
	I1217 19:59:05.082608  612025 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 19:59:05.082874  612025 start.go:159] libmachine.API.Create for "old-k8s-version-894575" (driver="docker")
	I1217 19:59:05.082908  612025 client.go:173] LocalClient.Create starting
	I1217 19:59:05.082981  612025 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:59:05.083017  612025 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:05.083035  612025 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:05.083123  612025 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:59:05.083148  612025 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:05.083178  612025 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:05.083543  612025 cli_runner.go:164] Run: docker network inspect old-k8s-version-894575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:59:05.102177  612025 cli_runner.go:211] docker network inspect old-k8s-version-894575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:59:05.102261  612025 network_create.go:284] running [docker network inspect old-k8s-version-894575] to gather additional debugging logs...
	I1217 19:59:05.102288  612025 cli_runner.go:164] Run: docker network inspect old-k8s-version-894575
	W1217 19:59:05.120764  612025 cli_runner.go:211] docker network inspect old-k8s-version-894575 returned with exit code 1
	I1217 19:59:05.120819  612025 network_create.go:287] error running [docker network inspect old-k8s-version-894575]: docker network inspect old-k8s-version-894575: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-894575 not found
	I1217 19:59:05.120840  612025 network_create.go:289] output of [docker network inspect old-k8s-version-894575]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-894575 not found
	
	** /stderr **
	I1217 19:59:05.121048  612025 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:05.140594  612025 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 19:59:05.141466  612025 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 19:59:05.141969  612025 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 19:59:05.142694  612025 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 19:59:05.143859  612025 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea19e0}
	I1217 19:59:05.143893  612025 network_create.go:124] attempt to create docker network old-k8s-version-894575 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 19:59:05.143953  612025 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-894575 old-k8s-version-894575
	I1217 19:59:05.198298  612025 network_create.go:108] docker network old-k8s-version-894575 192.168.85.0/24 created
	I1217 19:59:05.198337  612025 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-894575" container
	I1217 19:59:05.198452  612025 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:59:05.216965  612025 cli_runner.go:164] Run: docker volume create old-k8s-version-894575 --label name.minikube.sigs.k8s.io=old-k8s-version-894575 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:59:05.236367  612025 oci.go:103] Successfully created a docker volume old-k8s-version-894575
	I1217 19:59:05.236468  612025 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-894575-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-894575 --entrypoint /usr/bin/test -v old-k8s-version-894575:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:59:05.657528  612025 oci.go:107] Successfully prepared a docker volume old-k8s-version-894575
	I1217 19:59:05.657610  612025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:59:05.657626  612025 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 19:59:05.657688  612025 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-894575:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 19:59:07.382335  613002 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 19:59:07.382587  613002 start.go:159] libmachine.API.Create for "no-preload-832842" (driver="docker")
	I1217 19:59:07.382621  613002 client.go:173] LocalClient.Create starting
	I1217 19:59:07.382683  613002 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 19:59:07.382719  613002 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:07.382742  613002 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:07.382824  613002 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 19:59:07.382863  613002 main.go:143] libmachine: Decoding PEM data...
	I1217 19:59:07.382878  613002 main.go:143] libmachine: Parsing certificate...
	I1217 19:59:07.383332  613002 cli_runner.go:164] Run: docker network inspect no-preload-832842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 19:59:07.403140  613002 cli_runner.go:211] docker network inspect no-preload-832842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 19:59:07.403234  613002 network_create.go:284] running [docker network inspect no-preload-832842] to gather additional debugging logs...
	I1217 19:59:07.403256  613002 cli_runner.go:164] Run: docker network inspect no-preload-832842
	W1217 19:59:07.421450  613002 cli_runner.go:211] docker network inspect no-preload-832842 returned with exit code 1
	I1217 19:59:07.421491  613002 network_create.go:287] error running [docker network inspect no-preload-832842]: docker network inspect no-preload-832842: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-832842 not found
	I1217 19:59:07.421503  613002 network_create.go:289] output of [docker network inspect no-preload-832842]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-832842 not found
	
	** /stderr **
	I1217 19:59:07.421583  613002 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:07.442988  613002 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 19:59:07.443845  613002 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 19:59:07.444368  613002 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 19:59:07.444869  613002 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 19:59:07.445506  613002 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 19:59:07.445916  613002 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-a8fdc05f236b IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:3e:59:80:d3:98:cc} reservation:<nil>}
	I1217 19:59:07.446618  613002 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001789860}
	I1217 19:59:07.446639  613002 network_create.go:124] attempt to create docker network no-preload-832842 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 19:59:07.446685  613002 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-832842 no-preload-832842
	I1217 19:59:07.497988  613002 network_create.go:108] docker network no-preload-832842 192.168.103.0/24 created
	I1217 19:59:07.498030  613002 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-832842" container
	I1217 19:59:07.498167  613002 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 19:59:07.504784  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 19:59:07.508291  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:07.513354  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:07.517929  613002 cli_runner.go:164] Run: docker volume create no-preload-832842 --label name.minikube.sigs.k8s.io=no-preload-832842 --label created_by.minikube.sigs.k8s.io=true
	I1217 19:59:07.521995  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:07.522679  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 19:59:07.525138  613002 cache.go:162] opening:  /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 19:59:07.537568  613002 oci.go:103] Successfully created a docker volume no-preload-832842
	I1217 19:59:07.537635  613002 cli_runner.go:164] Run: docker run --rm --name no-preload-832842-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-832842 --entrypoint /usr/bin/test -v no-preload-832842:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 19:59:07.886133  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 exists
	I1217 19:59:07.886160  613002 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1" took 530.72735ms
	I1217 19:59:07.886173  613002 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 succeeded
	I1217 19:59:08.795301  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 exists
	I1217 19:59:08.795333  613002 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1" took 1.439801855s
	I1217 19:59:08.795358  613002 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 succeeded
	I1217 19:59:08.856112  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 exists
	I1217 19:59:08.856151  613002 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1" took 1.500676672s
	I1217 19:59:08.856172  613002 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 succeeded
	I1217 19:59:08.866708  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 exists
	I1217 19:59:08.866747  613002 cache.go:96] cache image "registry.k8s.io/etcd:3.6.6-0" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0" took 1.511285042s
	I1217 19:59:08.866766  613002 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.6-0 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 succeeded
	I1217 19:59:08.882861  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 exists
	I1217 19:59:08.882894  613002 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1" took 1.527516133s
	I1217 19:59:08.882911  613002 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-rc.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 succeeded
	I1217 19:59:08.920683  613002 cache.go:157] /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 exists
	I1217 19:59:08.920719  613002 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "/home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1" took 1.5651038s
	I1217 19:59:08.920735  613002 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 succeeded
	I1217 19:59:08.920755  613002 cache.go:87] Successfully saved all images to host disk.
	I1217 19:59:10.751842  613002 cli_runner.go:217] Completed: docker run --rm --name no-preload-832842-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-832842 --entrypoint /usr/bin/test -v no-preload-832842:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (3.214156278s)
	I1217 19:59:10.751877  613002 oci.go:107] Successfully prepared a docker volume no-preload-832842
	I1217 19:59:10.751928  613002 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	W1217 19:59:10.752005  613002 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:59:10.752042  613002 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:59:10.752129  613002 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:59:10.816044  613002 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-832842 --name no-preload-832842 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-832842 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-832842 --network no-preload-832842 --ip 192.168.103.2 --volume no-preload-832842:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:59:11.106025  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Running}}
	I1217 19:59:11.129937  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:11.157039  613002 cli_runner.go:164] Run: docker exec no-preload-832842 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:59:11.207580  613002 oci.go:144] the created container "no-preload-832842" has a running status.
	I1217 19:59:11.207623  613002 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa...
	I1217 19:59:11.328962  613002 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:59:11.361092  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:11.384868  613002 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:59:11.384898  613002 kic_runner.go:114] Args: [docker exec --privileged no-preload-832842 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:59:11.432305  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:11.460384  613002 machine.go:94] provisionDockerMachine start ...
	I1217 19:59:11.460491  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:11.485214  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.485606  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:11.485626  613002 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:59:11.641481  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-832842
	
	I1217 19:59:11.641514  613002 ubuntu.go:182] provisioning hostname "no-preload-832842"
	I1217 19:59:11.641584  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:11.664626  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.666007  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:11.666046  613002 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-832842 && echo "no-preload-832842" | sudo tee /etc/hostname
	I1217 19:59:11.825329  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-832842
	
	I1217 19:59:11.825437  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:11.846118  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.846361  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:11.846388  613002 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-832842' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-832842/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-832842' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:59:11.991193  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:59:11.991226  613002 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 19:59:11.991283  613002 ubuntu.go:190] setting up certificates
	I1217 19:59:11.991300  613002 provision.go:84] configureAuth start
	I1217 19:59:11.991367  613002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-832842
	I1217 19:59:12.009813  613002 provision.go:143] copyHostCerts
	I1217 19:59:12.009872  613002 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 19:59:12.009881  613002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 19:59:12.009958  613002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 19:59:12.010050  613002 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 19:59:12.010058  613002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 19:59:12.010116  613002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 19:59:12.010189  613002 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 19:59:12.010198  613002 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 19:59:12.010223  613002 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 19:59:12.010278  613002 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.no-preload-832842 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-832842]
	I1217 19:59:07.465376  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:12.197779  613002 provision.go:177] copyRemoteCerts
	I1217 19:59:12.197839  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:59:12.197880  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.216040  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:12.318758  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:59:12.338513  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 19:59:12.357237  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 19:59:12.375227  613002 provision.go:87] duration metric: took 383.904667ms to configureAuth
	I1217 19:59:12.375269  613002 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:59:12.375433  613002 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:59:12.375535  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.393214  613002 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:12.393468  613002 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33438 <nil> <nil>}
	I1217 19:59:12.393494  613002 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:59:12.685460  613002 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:59:12.685487  613002 machine.go:97] duration metric: took 1.225074671s to provisionDockerMachine
	I1217 19:59:12.685498  613002 client.go:176] duration metric: took 5.3028708s to LocalClient.Create
	I1217 19:59:12.685518  613002 start.go:167] duration metric: took 5.302932427s to libmachine.API.Create "no-preload-832842"
	I1217 19:59:12.685527  613002 start.go:293] postStartSetup for "no-preload-832842" (driver="docker")
	I1217 19:59:12.685548  613002 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:59:12.685624  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:59:12.685665  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.704175  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:12.810348  613002 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:59:12.814412  613002 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:59:12.814439  613002 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:59:12.814451  613002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:59:12.814515  613002 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:59:12.814588  613002 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 19:59:12.814682  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:59:12.823021  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:12.845396  613002 start.go:296] duration metric: took 159.846518ms for postStartSetup
	I1217 19:59:12.845766  613002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-832842
	I1217 19:59:12.863764  613002 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/config.json ...
	I1217 19:59:12.864113  613002 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:59:12.864171  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:12.881736  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:12.982690  613002 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:59:12.987360  613002 start.go:128] duration metric: took 5.607443467s to createHost
	I1217 19:59:12.987392  613002 start.go:83] releasing machines lock for "no-preload-832842", held for 5.607606655s
	I1217 19:59:12.987460  613002 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-832842
	I1217 19:59:13.005664  613002 ssh_runner.go:195] Run: cat /version.json
	I1217 19:59:13.005717  613002 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:59:13.005729  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:13.005775  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:13.025320  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:13.025439  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:13.177139  613002 ssh_runner.go:195] Run: systemctl --version
	I1217 19:59:13.183940  613002 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:59:13.219250  613002 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:59:13.223980  613002 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:59:13.224060  613002 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:59:13.250609  613002 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 19:59:13.250634  613002 start.go:496] detecting cgroup driver to use...
	I1217 19:59:13.250675  613002 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:59:13.250726  613002 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:59:13.266745  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:59:13.279174  613002 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:59:13.279236  613002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:59:13.297069  613002 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:59:13.315051  613002 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:59:13.401680  613002 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:59:13.491655  613002 docker.go:234] disabling docker service ...
	I1217 19:59:13.491716  613002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:59:13.512752  613002 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:59:13.526662  613002 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:59:13.608428  613002 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:59:13.693855  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:59:13.707271  613002 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:59:13.722243  613002 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 19:59:13.722310  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.732970  613002 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:59:13.733041  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.742144  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.751182  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.759924  613002 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:59:13.768012  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.777071  613002 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.791130  613002 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:13.800404  613002 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:59:13.808001  613002 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:59:13.815592  613002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:13.900685  613002 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 19:59:14.046676  613002 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:59:14.046750  613002 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:59:14.051043  613002 start.go:564] Will wait 60s for crictl version
	I1217 19:59:14.051116  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.054924  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:59:14.080861  613002 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:59:14.080956  613002 ssh_runner.go:195] Run: crio --version
	I1217 19:59:14.109659  613002 ssh_runner.go:195] Run: crio --version
	I1217 19:59:14.140655  613002 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 19:59:10.486440  612025 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-894575:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.828689754s)
	I1217 19:59:10.486482  612025 kic.go:203] duration metric: took 4.828851778s to extract preloaded images to volume ...
	W1217 19:59:10.486574  612025 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 19:59:10.486604  612025 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 19:59:10.486647  612025 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 19:59:10.547327  612025 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-894575 --name old-k8s-version-894575 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-894575 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-894575 --network old-k8s-version-894575 --ip 192.168.85.2 --volume old-k8s-version-894575:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 19:59:10.885351  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Running}}
	I1217 19:59:10.906110  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:10.927145  612025 cli_runner.go:164] Run: docker exec old-k8s-version-894575 stat /var/lib/dpkg/alternatives/iptables
	I1217 19:59:10.987110  612025 oci.go:144] the created container "old-k8s-version-894575" has a running status.
	I1217 19:59:10.987161  612025 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa...
	I1217 19:59:11.050534  612025 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 19:59:11.081072  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:11.102728  612025 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 19:59:11.102752  612025 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-894575 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 19:59:11.157152  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:11.180656  612025 machine.go:94] provisionDockerMachine start ...
	I1217 19:59:11.180828  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:11.203202  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:11.203572  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:11.203590  612025 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 19:59:11.204528  612025 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55912->127.0.0.1:33433: read: connection reset by peer
	I1217 19:59:14.354070  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-894575
	
	I1217 19:59:14.354285  612025 ubuntu.go:182] provisioning hostname "old-k8s-version-894575"
	I1217 19:59:14.355054  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:14.381720  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:14.382061  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:14.382091  612025 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-894575 && echo "old-k8s-version-894575" | sudo tee /etc/hostname
	I1217 19:59:14.560599  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-894575
	
	I1217 19:59:14.560701  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:14.580139  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:14.580356  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:14.580373  612025 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-894575' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-894575/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-894575' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 19:59:14.732018  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 19:59:14.732087  612025 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 19:59:14.732137  612025 ubuntu.go:190] setting up certificates
	I1217 19:59:14.732152  612025 provision.go:84] configureAuth start
	I1217 19:59:14.732233  612025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-894575
	I1217 19:59:14.755418  612025 provision.go:143] copyHostCerts
	I1217 19:59:14.755500  612025 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 19:59:14.755523  612025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 19:59:14.755618  612025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 19:59:14.755773  612025 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 19:59:14.755789  612025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 19:59:14.755838  612025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 19:59:14.755978  612025 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 19:59:14.755994  612025 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 19:59:14.756040  612025 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 19:59:14.756232  612025 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-894575 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-894575]
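The server cert above is generated in Go by minikube itself; no openssl step appears in the log. As an illustrative way to confirm the SAN list that was requested (127.0.0.1, 192.168.85.2, localhost, minikube, old-k8s-version-894575), something like the following could be run on the host, with the server.pem path copied from the log line above:

    # print the SANs on the generated server cert (illustrative check, not part of the test run)
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'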
	I1217 19:59:14.848760  612025 provision.go:177] copyRemoteCerts
	I1217 19:59:14.848841  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 19:59:14.848906  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:14.873616  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:14.142269  613002 cli_runner.go:164] Run: docker network inspect no-preload-832842 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:14.161365  613002 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 19:59:14.165671  613002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:59:14.176304  613002 kubeadm.go:884] updating cluster {Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:59:14.176440  613002 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 19:59:14.176475  613002 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:59:14.202465  613002 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1217 19:59:14.202493  613002 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-rc.1 registry.k8s.io/kube-controller-manager:v1.35.0-rc.1 registry.k8s.io/kube-scheduler:v1.35.0-rc.1 registry.k8s.io/kube-proxy:v1.35.0-rc.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.6-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 19:59:14.202556  613002 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:14.202583  613002 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.202607  613002 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.202633  613002 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.202640  613002 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 19:59:14.202681  613002 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.202701  613002 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.202737  613002 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.203897  613002 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.203909  613002 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 19:59:14.203897  613002 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:14.203920  613002 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.203908  613002 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.203959  613002 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.6-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.203997  613002 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.204003  613002 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-rc.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.323055  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.323436  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.326904  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.329417  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.335829  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.346226  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.348116  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 19:59:14.372870  613002 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-rc.1" does not exist at hash "73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc" in container runtime
	I1217 19:59:14.372935  613002 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.372988  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.374889  613002 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 19:59:14.374945  613002 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.375003  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.381999  613002 cache_images.go:118] "registry.k8s.io/etcd:3.6.6-0" needs transfer: "registry.k8s.io/etcd:3.6.6-0" does not exist at hash "0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2" in container runtime
	I1217 19:59:14.382042  613002 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.382111  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.384169  613002 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-rc.1" does not exist at hash "58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce" in container runtime
	I1217 19:59:14.384215  613002 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.384265  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.390916  613002 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-rc.1" does not exist at hash "5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614" in container runtime
	I1217 19:59:14.390979  613002 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.391038  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.400541  613002 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-rc.1" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-rc.1" does not exist at hash "af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a" in container runtime
	I1217 19:59:14.400598  613002 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.400628  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.400553  613002 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 19:59:14.400651  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.400676  613002 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1217 19:59:14.400709  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.400743  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.400770  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:14.400792  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.400809  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.406289  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.438613  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.444038  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 19:59:14.444071  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.444240  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.444327  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.444334  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.448653  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.478479  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.35.0-rc.1
	I1217 19:59:14.482581  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.6-0
	I1217 19:59:14.482813  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 19:59:14.487002  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
	I1217 19:59:14.487054  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.35.0-rc.1
	I1217 19:59:14.487002  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 19:59:14.487123  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.35.0-rc.1
	I1217 19:59:14.517672  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:14.517786  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:14.520371  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0
	I1217 19:59:14.520488  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0
	I1217 19:59:14.521037  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1217 19:59:14.527267  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1
	I1217 19:59:14.527304  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:14.527317  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1
	I1217 19:59:14.527347  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:14.527367  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 19:59:14.527395  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:14.527397  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.527426  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.6-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.6-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.6-0': No such file or directory
	I1217 19:59:14.527437  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1 (17248256 bytes)
	I1217 19:59:14.527449  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 --> /var/lib/minikube/images/etcd_3.6.6-0 (23653376 bytes)
	I1217 19:59:14.527416  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:14.527396  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 19:59:14.571040  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1217 19:59:14.571039  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.571120  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1 (25791488 bytes)
	I1217 19:59:14.571280  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 19:59:14.571289  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.571313  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1 (27697152 bytes)
	I1217 19:59:14.571319  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1': No such file or directory
	I1217 19:59:14.571291  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 19:59:14.571339  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1 (23144960 bytes)
	I1217 19:59:14.571342  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 19:59:14.699873  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 19:59:14.699916  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 19:59:14.795465  613002 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 19:59:14.795534  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1217 19:59:15.145896  613002 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:15.265623  613002 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 19:59:15.265638  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1217 19:59:15.265665  613002 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:15.265690  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:15.265707  613002 ssh_runner.go:195] Run: which crictl
	I1217 19:59:15.265740  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1
	I1217 19:59:16.405593  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.35.0-rc.1: (1.139822545s)
	I1217 19:59:16.405633  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.35.0-rc.1 from cache
	I1217 19:59:16.405641  613002 ssh_runner.go:235] Completed: which crictl: (1.139907704s)
	I1217 19:59:16.405665  613002 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.6-0
	I1217 19:59:16.405708  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0
	I1217 19:59:16.405713  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
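The burst of 613002 lines above is the image-cache path: no preload tarball exists for v1.35.0-rc.1, so each image is checked with podman image inspect, any stale tag is removed with crictl rmi, the cached image tarball is copied to /var/lib/minikube/images, and podman load imports it into cri-o's storage. A minimal sketch of that flow for one image, reusing the exact paths and tags from the log (run on the minikube node; assumes the tarball has already been copied over as the scp lines show):

    # skip the load if cri-o already has the image
    if ! sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.10.1 >/dev/null 2>&1; then
      # drop any stale tag, then import the cached tarball into the runtime's storage
      sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1 || true
      sudo podman load -i /var/lib/minikube/images/pause_3.10.1
    fi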
	I1217 19:59:12.465990  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 19:59:12.466047  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:14.988394  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 19:59:15.087322  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1217 19:59:15.192226  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 19:59:15.214454  612025 provision.go:87] duration metric: took 482.278515ms to configureAuth
	I1217 19:59:15.214498  612025 ubuntu.go:206] setting minikube options for container-runtime
	I1217 19:59:15.214720  612025 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 19:59:15.214877  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.238011  612025 main.go:143] libmachine: Using SSH client type: native
	I1217 19:59:15.238370  612025 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1217 19:59:15.238399  612025 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 19:59:15.554712  612025 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 19:59:15.554746  612025 machine.go:97] duration metric: took 4.374010357s to provisionDockerMachine
	I1217 19:59:15.554760  612025 client.go:176] duration metric: took 10.471842965s to LocalClient.Create
	I1217 19:59:15.554781  612025 start.go:167] duration metric: took 10.471908055s to libmachine.API.Create "old-k8s-version-894575"
	I1217 19:59:15.554791  612025 start.go:293] postStartSetup for "old-k8s-version-894575" (driver="docker")
	I1217 19:59:15.554806  612025 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 19:59:15.554870  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 19:59:15.554948  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.574541  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:15.683411  612025 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 19:59:15.688001  612025 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 19:59:15.688037  612025 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 19:59:15.688052  612025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 19:59:15.688146  612025 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 19:59:15.688250  612025 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 19:59:15.688377  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 19:59:15.697743  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:15.722737  612025 start.go:296] duration metric: took 167.925036ms for postStartSetup
	I1217 19:59:15.723472  612025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-894575
	I1217 19:59:15.748004  612025 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/config.json ...
	I1217 19:59:15.748362  612025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:59:15.748429  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.772713  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:15.881299  612025 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 19:59:15.887512  612025 start.go:128] duration metric: took 10.806894995s to createHost
	I1217 19:59:15.887544  612025 start.go:83] releasing machines lock for "old-k8s-version-894575", held for 10.807110572s
	I1217 19:59:15.887620  612025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-894575
	I1217 19:59:15.909226  612025 ssh_runner.go:195] Run: cat /version.json
	I1217 19:59:15.909243  612025 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 19:59:15.909291  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.909343  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:15.932854  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:15.932917  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:16.038284  612025 ssh_runner.go:195] Run: systemctl --version
	I1217 19:59:16.114179  612025 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 19:59:16.160800  612025 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 19:59:16.166297  612025 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 19:59:16.166372  612025 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 19:59:16.194230  612025 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
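The find invocation above renames every bridge or podman CNI config under /etc/cni/net.d to *.mk_disabled, so that only the CNI minikube installs next (kindnet, per the cni.go line further below) stays active. A re-quoted, standalone equivalent of that command, offered as a sketch:

    # disable pre-existing bridge/podman CNI configs (find runs as root, so no inner sudo is needed)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;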
	I1217 19:59:16.194262  612025 start.go:496] detecting cgroup driver to use...
	I1217 19:59:16.194302  612025 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 19:59:16.194355  612025 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 19:59:16.212431  612025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 19:59:16.226442  612025 docker.go:218] disabling cri-docker service (if available) ...
	I1217 19:59:16.226509  612025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 19:59:16.245823  612025 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 19:59:16.264742  612025 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 19:59:16.373692  612025 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 19:59:16.476619  612025 docker.go:234] disabling docker service ...
	I1217 19:59:16.476681  612025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 19:59:16.497483  612025 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 19:59:16.510960  612025 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 19:59:16.595568  612025 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 19:59:16.682352  612025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 19:59:16.695356  612025 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 19:59:16.710247  612025 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1217 19:59:16.710309  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.721638  612025 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 19:59:16.721703  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.731590  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.741533  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.751481  612025 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 19:59:16.760217  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.769347  612025 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.784142  612025 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 19:59:16.794546  612025 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 19:59:16.802330  612025 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 19:59:16.809983  612025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:16.891283  612025 ssh_runner.go:195] Run: sudo systemctl restart crio
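Taken together, the crictl.yaml write and the sed edits above converge on the following runtime configuration before cri-o is restarted. This is a sketch of the resulting key/value state for this run, not a verbatim dump of the files:

    # /etc/crictl.yaml: point crictl at the cri-o socket
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # /etc/crio/crio.conf.d/02-crio.conf: keys the sed commands set for this run
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

    # then pick the changes up
    sudo systemctl daemon-reload
    sudo systemctl restart crio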
	I1217 19:59:17.259133  612025 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 19:59:17.259215  612025 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 19:59:17.264522  612025 start.go:564] Will wait 60s for crictl version
	I1217 19:59:17.264597  612025 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.269021  612025 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 19:59:17.298337  612025 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 19:59:17.298427  612025 ssh_runner.go:195] Run: crio --version
	I1217 19:59:17.328846  612025 ssh_runner.go:195] Run: crio --version
	I1217 19:59:17.360505  612025 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.3 ...
	I1217 19:59:17.361873  612025 cli_runner.go:164] Run: docker network inspect old-k8s-version-894575 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 19:59:17.379694  612025 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 19:59:17.383968  612025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
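The one-liner above pins host.minikube.internal to the network gateway (192.168.85.1 here) by rewriting /etc/hosts through a temp file. Expanded into a readable form (a sketch; printf stands in for the literal tab embedded in the logged command):

    # drop any stale host.minikube.internal entry, append the gateway mapping, swap the file back in
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts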
	I1217 19:59:17.395630  612025 kubeadm.go:884] updating cluster {Name:old-k8s-version-894575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-894575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 19:59:17.395751  612025 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1217 19:59:17.395793  612025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:59:17.430165  612025 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:59:17.430209  612025 crio.go:433] Images already preloaded, skipping extraction
	I1217 19:59:17.430270  612025 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 19:59:17.460476  612025 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 19:59:17.460499  612025 cache_images.go:86] Images are preloaded, skipping loading
	I1217 19:59:17.460508  612025 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 crio true true} ...
	I1217 19:59:17.460592  612025 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-894575 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-894575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 19:59:17.460662  612025 ssh_runner.go:195] Run: crio config
	I1217 19:59:17.514354  612025 cni.go:84] Creating CNI manager for ""
	I1217 19:59:17.514384  612025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:17.514404  612025 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:59:17.514430  612025 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-894575 NodeName:old-k8s-version-894575 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:59:17.514606  612025 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-894575"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:59:17.514689  612025 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 19:59:17.523692  612025 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 19:59:17.523768  612025 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:59:17.534543  612025 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1217 19:59:17.550669  612025 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 19:59:17.570716  612025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1217 19:59:17.586529  612025 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:59:17.591490  612025 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 19:59:17.604319  612025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:17.709931  612025 ssh_runner.go:195] Run: sudo systemctl start kubelet
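The scp lines above stage the kubelet drop-in, the base unit, and the kubeadm config that were printed earlier, and the unit is then reloaded and started. A short sketch of the on-node result (paths and byte counts copied from the log):

    # staged over ssh by minikube:
    #   /etc/systemd/system/kubelet.service.d/10-kubeadm.conf  (drop-in with the ExecStart flags, 372 bytes)
    #   /lib/systemd/system/kubelet.service                    (base unit, 352 bytes)
    #   /var/tmp/minikube/kubeadm.yaml.new                     (kubeadm config shown above, 2159 bytes)
    sudo systemctl daemon-reload
    sudo systemctl start kubelet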
	I1217 19:59:17.734439  612025 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575 for IP: 192.168.85.2
	I1217 19:59:17.734464  612025 certs.go:195] generating shared ca certs ...
	I1217 19:59:17.734487  612025 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.734640  612025 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:59:17.734689  612025 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:59:17.734696  612025 certs.go:257] generating profile certs ...
	I1217 19:59:17.734746  612025 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.key
	I1217 19:59:17.734761  612025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt with IP's: []
	I1217 19:59:17.791783  612025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt ...
	I1217 19:59:17.791823  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: {Name:mka3e7404d2bf2be2c2ad017710d4ae4c61748c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.792047  612025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.key ...
	I1217 19:59:17.792066  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.key: {Name:mk24de6b250e021196965dc5b704e038970df7f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.792243  612025 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d
	I1217 19:59:17.792271  612025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 19:59:17.899874  612025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d ...
	I1217 19:59:17.899903  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d: {Name:mke7171f32e441ac885f6f108f6ca622009b6054 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.900116  612025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d ...
	I1217 19:59:17.900135  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d: {Name:mke8abbbd0ca8f8ca55c39333f965fe1dc236d23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:17.900248  612025 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt.42d7654d -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt
	I1217 19:59:17.900344  612025 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key.42d7654d -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key
	I1217 19:59:17.900408  612025 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key
	I1217 19:59:17.900424  612025 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt with IP's: []
	I1217 19:59:18.020162  612025 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt ...
	I1217 19:59:18.020199  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt: {Name:mkf6fd390f5ac409002c1ff65bfc5b799802f031 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:18.020412  612025 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key ...
	I1217 19:59:18.020441  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key: {Name:mk25f53b65f4ff150cc6249c6126fc63cd51dc02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:18.020631  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:59:18.020672  612025 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:59:18.020679  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:59:18.020705  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:59:18.020727  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:59:18.020751  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:59:18.020791  612025 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:18.021428  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:59:18.040881  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:59:18.059705  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:59:18.079896  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:59:18.100632  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 19:59:18.121337  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 19:59:18.141038  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:59:18.161305  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:59:18.182152  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:59:18.202628  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:59:18.223178  612025 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:59:18.242163  612025 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:59:18.255610  612025 ssh_runner.go:195] Run: openssl version
	I1217 19:59:18.262101  612025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.270728  612025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:59:18.278801  612025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.283421  612025 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.283487  612025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:18.317952  612025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:59:18.326465  612025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 19:59:18.334353  612025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.343067  612025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:59:18.352047  612025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.356406  612025 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.356471  612025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:59:18.391443  612025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:59:18.400089  612025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 19:59:18.408903  612025 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.417744  612025 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:59:18.427264  612025 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.431630  612025 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.431695  612025 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:59:18.473630  612025 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:18.484003  612025 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
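The hash-and-symlink sequence above follows the standard OpenSSL CA directory convention: each trusted certificate is made reachable under /etc/ssl/certs/<subject-hash>.0 so TLS clients can look it up by subject hash. A minimal sketch of the same two steps for the minikube CA (the b5213941 value comes from the log lines above, not from a new computation):

	# print the subject hash that names the symlink (prints b5213941 for this CA)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# point <hash>.0 at the installed certificate so OpenSSL-based clients can find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0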
	I1217 19:59:18.493785  612025 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:59:18.498460  612025 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:59:18.498535  612025 kubeadm.go:401] StartCluster: {Name:old-k8s-version-894575 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-894575 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:59:18.498639  612025 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:59:18.498702  612025 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:59:18.530102  612025 cri.go:89] found id: ""
	I1217 19:59:18.530180  612025 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:59:18.539061  612025 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:59:18.547581  612025 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:59:18.547648  612025 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:59:18.555764  612025 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:59:18.555783  612025 kubeadm.go:158] found existing configuration files:
	
	I1217 19:59:18.555826  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:59:18.563928  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:59:18.563991  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:59:18.571776  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:59:18.580768  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:59:18.580821  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:59:18.588420  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:59:18.596488  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:59:18.596553  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:59:18.604274  612025 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:59:18.614398  612025 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:59:18.614466  612025 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:59:18.625485  612025 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:59:18.725267  612025 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 19:59:18.811521  612025 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 19:59:17.868446  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.6-0: (1.462713588s)
	I1217 19:59:17.868476  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.6-0 from cache
	I1217 19:59:17.868503  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:17.868563  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1
	I1217 19:59:17.868504  613002 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.462771195s)
	I1217 19:59:17.868667  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:19.108520  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.35.0-rc.1: (1.239930447s)
	I1217 19:59:19.108554  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.35.0-rc.1 from cache
	I1217 19:59:19.108581  613002 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 19:59:19.108581  613002 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.239887056s)
	I1217 19:59:19.108648  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1
	I1217 19:59:19.108653  613002 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:20.379356  613002 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.270666145s)
	I1217 19:59:20.379428  613002 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1217 19:59:20.379419  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.13.1: (1.270743543s)
	I1217 19:59:20.379456  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.13.1 from cache
	I1217 19:59:20.379487  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:20.379525  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 19:59:20.379534  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1
	I1217 19:59:20.384885  613002 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 19:59:20.384925  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1217 19:59:21.836993  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.35.0-rc.1: (1.457430222s)
	I1217 19:59:21.837026  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.35.0-rc.1 from cache
	I1217 19:59:21.837058  613002 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 19:59:21.837118  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1
	I1217 19:59:17.466359  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 19:59:17.466433  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:17.466488  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:17.497770  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:17.497799  596882 cri.go:89] found id: "3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	I1217 19:59:17.497806  596882 cri.go:89] found id: ""
	I1217 19:59:17.497817  596882 logs.go:282] 2 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab]
	I1217 19:59:17.497913  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.502819  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.506861  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:17.506938  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:17.539207  596882 cri.go:89] found id: ""
	I1217 19:59:17.539240  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.539252  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:17.539260  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:17.539327  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:17.570765  596882 cri.go:89] found id: ""
	I1217 19:59:17.570805  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.570816  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:17.570824  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:17.570893  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:17.604315  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:17.604338  596882 cri.go:89] found id: ""
	I1217 19:59:17.604347  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:17.604425  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.608662  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:17.608732  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:17.647593  596882 cri.go:89] found id: ""
	I1217 19:59:17.647643  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.647655  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:17.647663  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:17.647743  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:17.684782  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:17.684811  596882 cri.go:89] found id: "4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0"
	I1217 19:59:17.684817  596882 cri.go:89] found id: ""
	I1217 19:59:17.684828  596882 logs.go:282] 2 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb 4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0]
	I1217 19:59:17.684888  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.689673  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:17.694157  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:17.694216  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:17.726874  596882 cri.go:89] found id: ""
	I1217 19:59:17.726907  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.726920  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:17.726928  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:17.726987  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:17.763396  596882 cri.go:89] found id: ""
	I1217 19:59:17.763430  596882 logs.go:282] 0 containers: []
	W1217 19:59:17.763444  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:17.763469  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:17.763492  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:17.803831  596882 logs.go:123] Gathering logs for kube-controller-manager [4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0] ...
	I1217 19:59:17.803865  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 4b0f0a789e86f48749beab0ed9a0b53d648eb2b29f2ba5276fc180b68b6b60a0"
	I1217 19:59:17.837649  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:17.837680  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:17.878291  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:17.878342  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:17.898217  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:17.898248  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 19:59:23.424852  613002 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.35.0-rc.1: (1.587705503s)
	I1217 19:59:23.424886  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.35.0-rc.1 from cache
	I1217 19:59:23.424919  613002 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 19:59:23.424969  613002 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1217 19:59:24.032178  613002 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/22186-372245/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1217 19:59:24.032233  613002 cache_images.go:125] Successfully loaded all cached images
	I1217 19:59:24.032242  613002 cache_images.go:94] duration metric: took 9.829729547s to LoadCachedImages
	I1217 19:59:24.032259  613002 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 19:59:24.032379  613002 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-832842 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
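In the generated kubelet unit above, the empty ExecStart= line is the usual systemd pattern for replacing (rather than appending to) the ExecStart of the base kubelet.service; the drop-in itself is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this run. A sketch of how to inspect the effective unit on the node with standard systemctl commands:

	# show the base unit plus all drop-ins, in the order systemd merges them
	systemctl cat kubelet
	# show the ExecStart value systemd will actually execute
	systemctl show kubelet -p ExecStart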
	I1217 19:59:24.032472  613002 ssh_runner.go:195] Run: crio config
	I1217 19:59:24.083634  613002 cni.go:84] Creating CNI manager for ""
	I1217 19:59:24.083658  613002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:24.083675  613002 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 19:59:24.083699  613002 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-832842 NodeName:no-preload-832842 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 19:59:24.083817  613002 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-832842"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 19:59:24.083880  613002 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 19:59:24.092758  613002 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-rc.1': No such file or directory
	
	Initiating transfer...
	I1217 19:59:24.092818  613002 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 19:59:24.101629  613002 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	I1217 19:59:24.101711  613002 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet
	I1217 19:59:24.101731  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl
	I1217 19:59:24.101760  613002 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm
	I1217 19:59:24.105867  613002 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubectl': No such file or directory
	I1217 19:59:24.105898  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubectl --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl (58597560 bytes)
	I1217 19:59:24.929276  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:59:24.943995  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet
	I1217 19:59:24.948826  613002 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet': No such file or directory
	I1217 19:59:24.948861  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubelet --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubelet (58110244 bytes)
	I1217 19:59:25.112743  613002 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm
	I1217 19:59:25.117503  613002 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm': No such file or directory
	I1217 19:59:25.117532  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v1.35.0-rc.1/kubeadm --> /var/lib/minikube/binaries/v1.35.0-rc.1/kubeadm (72368312 bytes)
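The stat checks above implement a simple cache-or-transfer scheme: when a binary is missing under /var/lib/minikube/binaries, it is copied from the host-side cache, which in turn is downloaded from dl.k8s.io together with a .sha256 checksum file. A hedged sketch of the equivalent manual download and verification for one of these binaries (URL taken from the log above; this mirrors the standard Kubernetes install recipe rather than minikube's internal code path):

	curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl
	curl -LO https://dl.k8s.io/release/v1.35.0-rc.1/bin/linux/amd64/kubectl.sha256
	# the .sha256 file holds only the hash, so pair it with the filename for sha256sum
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check   # expect: kubectl: OK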
	I1217 19:59:25.293842  613002 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 19:59:25.316907  613002 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 19:59:25.337191  613002 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 19:59:25.365368  613002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1217 19:59:25.380129  613002 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 19:59:25.384754  613002 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
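The one-liner above replaces any stale control-plane.minikube.internal entry in /etc/hosts with the current mapping for this cluster. An expanded, commented equivalent using the same IP and hostname (illustrative only):

	# keep every line except an old control-plane.minikube.internal entry
	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$
	# append the mapping for this cluster's control plane
	printf '192.168.103.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$
	# copy the rewritten file back into place
	sudo cp /tmp/h.$$ /etc/hosts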
	I1217 19:59:25.396562  613002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:25.509650  613002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:25.534570  613002 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842 for IP: 192.168.103.2
	I1217 19:59:25.534596  613002 certs.go:195] generating shared ca certs ...
	I1217 19:59:25.534617  613002 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.534810  613002 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 19:59:25.534885  613002 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 19:59:25.534902  613002 certs.go:257] generating profile certs ...
	I1217 19:59:25.534978  613002 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.key
	I1217 19:59:25.535000  613002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt with IP's: []
	I1217 19:59:25.592742  613002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt ...
	I1217 19:59:25.592778  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: {Name:mk42486369f77e221c9aab49a651e94775b7bae1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.593012  613002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.key ...
	I1217 19:59:25.593039  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.key: {Name:mk89c16ef2c5a56a360da970a678076a4bb4c340 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.593230  613002 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62
	I1217 19:59:25.593253  613002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 19:59:25.631586  613002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62 ...
	I1217 19:59:25.631615  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62: {Name:mk54f8ae2cd91472c7364e13c057e39714727a19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.631793  613002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62 ...
	I1217 19:59:25.631811  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62: {Name:mk3c8685339dd7678c908188a708a038b54e0f45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.631912  613002 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt.234a7b62 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt
	I1217 19:59:25.632002  613002 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key.234a7b62 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key
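The apiserver certificate assembled here is signed for the service IP, loopback, and node IP listed a few lines above. To confirm which SANs actually ended up in the generated certificate, a standard openssl inspection of the profile copy on the host can be used (a sketch; the path is the profile directory shown in the log):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'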
	I1217 19:59:25.632086  613002 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key
	I1217 19:59:25.632106  613002 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt with IP's: []
	I1217 19:59:25.672169  613002 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt ...
	I1217 19:59:25.672202  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt: {Name:mk3db8ab41e551f463725ce6bc26b39897c9471f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.672387  613002 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key ...
	I1217 19:59:25.672403  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key: {Name:mkf5e02869cc5cc7bbc69664349cd424c4d4dc44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:25.672589  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 19:59:25.672628  613002 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 19:59:25.672638  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 19:59:25.672661  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 19:59:25.672689  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 19:59:25.672719  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 19:59:25.672773  613002 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 19:59:25.673485  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 19:59:25.692798  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 19:59:25.710914  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 19:59:25.730368  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 19:59:25.747899  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 19:59:25.765372  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 19:59:25.784884  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 19:59:25.805128  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 19:59:25.823250  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 19:59:25.843373  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 19:59:25.861323  613002 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 19:59:25.880401  613002 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 19:59:25.893860  613002 ssh_runner.go:195] Run: openssl version
	I1217 19:59:25.900269  613002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.908747  613002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 19:59:25.917382  613002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.921768  613002 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.921847  613002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 19:59:25.957212  613002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 19:59:25.965655  613002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 19:59:25.973751  613002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 19:59:25.981672  613002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 19:59:25.990021  613002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 19:59:25.994127  613002 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 19:59:25.994191  613002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 19:59:26.031902  613002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:26.041410  613002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 19:59:26.050131  613002 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.058752  613002 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 19:59:26.066986  613002 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.071354  613002 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.071419  613002 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 19:59:26.107649  613002 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 19:59:26.116547  613002 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 19:59:26.126087  613002 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 19:59:26.130159  613002 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 19:59:26.130249  613002 kubeadm.go:401] StartCluster: {Name:no-preload-832842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:no-preload-832842 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:59:26.130334  613002 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 19:59:26.130402  613002 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 19:59:26.159136  613002 cri.go:89] found id: ""
	I1217 19:59:26.159224  613002 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 19:59:26.168262  613002 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 19:59:26.177121  613002 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 19:59:26.177188  613002 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 19:59:26.185838  613002 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 19:59:26.185870  613002 kubeadm.go:158] found existing configuration files:
	
	I1217 19:59:26.185943  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 19:59:26.194347  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 19:59:26.194404  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 19:59:26.202469  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 19:59:26.211105  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 19:59:26.211183  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 19:59:26.219223  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 19:59:26.228072  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 19:59:26.228173  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 19:59:26.239165  613002 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 19:59:26.248476  613002 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 19:59:26.248547  613002 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 19:59:26.256813  613002 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 19:59:26.294070  613002 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 19:59:26.294177  613002 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:59:26.376698  613002 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 19:59:26.376790  613002 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 19:59:26.376837  613002 kubeadm.go:319] OS: Linux
	I1217 19:59:26.376911  613002 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 19:59:26.376955  613002 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 19:59:26.377049  613002 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 19:59:26.377171  613002 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 19:59:26.377236  613002 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 19:59:26.377301  613002 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 19:59:26.377374  613002 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 19:59:26.377457  613002 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 19:59:26.440855  613002 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:59:26.441052  613002 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:59:26.441214  613002 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 19:59:26.456663  613002 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:59:26.460625  613002 out.go:252]   - Generating certificates and keys ...
	I1217 19:59:26.460785  613002 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:59:26.460927  613002 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:59:26.512989  613002 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:59:26.713021  613002 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:59:26.766707  613002 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:59:26.797181  613002 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:59:26.899298  613002 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:59:26.899503  613002 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-832842] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 19:59:27.019263  613002 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:59:27.019477  613002 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-832842] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 19:59:27.062872  613002 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:59:27.094655  613002 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:59:27.159114  613002 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:59:27.159215  613002 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:59:27.243797  613002 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:59:27.325737  613002 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 19:59:27.389284  613002 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:59:27.532504  613002 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:59:27.584726  613002 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:59:27.585277  613002 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:59:27.589779  613002 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:59:28.530940  612025 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1217 19:59:28.531027  612025 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 19:59:28.531186  612025 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 19:59:28.531251  612025 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 19:59:28.531295  612025 kubeadm.go:319] OS: Linux
	I1217 19:59:28.531349  612025 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 19:59:28.531403  612025 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 19:59:28.531461  612025 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 19:59:28.531516  612025 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 19:59:28.531571  612025 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 19:59:28.531630  612025 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 19:59:28.531694  612025 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 19:59:28.531742  612025 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 19:59:28.531831  612025 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 19:59:28.531950  612025 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 19:59:28.532057  612025 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1217 19:59:28.532448  612025 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 19:59:28.534660  612025 out.go:252]   - Generating certificates and keys ...
	I1217 19:59:28.534908  612025 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 19:59:28.535120  612025 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 19:59:28.535300  612025 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 19:59:28.535430  612025 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 19:59:28.535603  612025 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 19:59:28.535760  612025 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 19:59:28.535912  612025 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 19:59:28.536213  612025 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-894575] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 19:59:28.536375  612025 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 19:59:28.536687  612025 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-894575] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 19:59:28.536865  612025 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 19:59:28.536949  612025 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 19:59:28.537002  612025 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 19:59:28.537089  612025 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 19:59:28.537156  612025 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 19:59:28.537229  612025 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 19:59:28.537309  612025 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 19:59:28.537390  612025 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 19:59:28.537541  612025 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 19:59:28.537714  612025 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 19:59:28.538921  612025 out.go:252]   - Booting up control plane ...
	I1217 19:59:28.539069  612025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:59:28.539215  612025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:59:28.539310  612025 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:59:28.539456  612025 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:59:28.539577  612025 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:59:28.539625  612025 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:59:28.539867  612025 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1217 19:59:28.540020  612025 kubeadm.go:319] [apiclient] All control plane components are healthy after 5.502314 seconds
	I1217 19:59:28.540208  612025 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:59:28.540431  612025 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:59:28.540553  612025 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:59:28.540839  612025 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-894575 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:59:28.540930  612025 kubeadm.go:319] [bootstrap-token] Using token: 8u44gz.h67xjev6iuf0hv1v
	I1217 19:59:28.542359  612025 out.go:252]   - Configuring RBAC rules ...
	I1217 19:59:28.542565  612025 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:59:28.542716  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:59:28.542912  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:59:28.543102  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:59:28.543254  612025 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:59:28.543379  612025 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:59:28.543553  612025 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:59:28.543622  612025 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:59:28.543684  612025 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:59:28.543693  612025 kubeadm.go:319] 
	I1217 19:59:28.543770  612025 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:59:28.543779  612025 kubeadm.go:319] 
	I1217 19:59:28.543877  612025 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:59:28.543883  612025 kubeadm.go:319] 
	I1217 19:59:28.543921  612025 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:59:28.544010  612025 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:59:28.544122  612025 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:59:28.544133  612025 kubeadm.go:319] 
	I1217 19:59:28.544213  612025 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:59:28.544222  612025 kubeadm.go:319] 
	I1217 19:59:28.544286  612025 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:59:28.544295  612025 kubeadm.go:319] 
	I1217 19:59:28.544369  612025 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:59:28.544491  612025 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:59:28.544575  612025 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:59:28.544582  612025 kubeadm.go:319] 
	I1217 19:59:28.544655  612025 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:59:28.544722  612025 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:59:28.544727  612025 kubeadm.go:319] 
	I1217 19:59:28.544796  612025 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 8u44gz.h67xjev6iuf0hv1v \
	I1217 19:59:28.544926  612025 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 19:59:28.544958  612025 kubeadm.go:319] 	--control-plane 
	I1217 19:59:28.544963  612025 kubeadm.go:319] 
	I1217 19:59:28.545063  612025 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:59:28.545073  612025 kubeadm.go:319] 
	I1217 19:59:28.545214  612025 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 8u44gz.h67xjev6iuf0hv1v \
	I1217 19:59:28.545415  612025 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 19:59:28.545431  612025 cni.go:84] Creating CNI manager for ""
	I1217 19:59:28.545444  612025 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:28.546893  612025 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 19:59:28.548280  612025 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 19:59:28.553989  612025 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1217 19:59:28.554009  612025 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 19:59:28.569060  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 19:59:29.291176  612025 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:59:29.291251  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:29.291333  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-894575 minikube.k8s.io/updated_at=2025_12_17T19_59_29_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=old-k8s-version-894575 minikube.k8s.io/primary=true
	I1217 19:59:29.302379  612025 ops.go:34] apiserver oom_adj: -16
	I1217 19:59:29.382819  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
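	The block above shows minikube applying the kindnet CNI manifest and then stamping the new control-plane node with minikube.k8s.io/* labels via kubectl. A minimal way to confirm those labels landed, assuming access to the same kubectl binary and kubeconfig used in the log, would be:

	    # Hypothetical check, not part of the test run: list the labels minikube set on the node.
	    sudo /var/lib/minikube/binaries/v1.28.0/kubectl \
	      --kubeconfig=/var/lib/minikube/kubeconfig \
	      get node old-k8s-version-894575 --show-labels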
	I1217 19:59:27.591559  613002 out.go:252]   - Booting up control plane ...
	I1217 19:59:27.591699  613002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 19:59:27.591811  613002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 19:59:27.592487  613002 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 19:59:27.607253  613002 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 19:59:27.607348  613002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 19:59:27.614185  613002 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 19:59:27.614297  613002 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 19:59:27.614343  613002 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 19:59:27.730105  613002 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 19:59:27.730294  613002 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 19:59:28.231405  613002 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.49573ms
	I1217 19:59:28.234412  613002 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 19:59:28.234554  613002 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1217 19:59:28.234710  613002 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 19:59:28.234829  613002 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 19:59:29.239954  613002 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005410296s
	I1217 19:59:29.879999  613002 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.645384574s
	I1217 19:59:31.735880  613002 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501435288s
	I1217 19:59:31.753036  613002 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 19:59:31.764161  613002 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 19:59:31.773029  613002 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 19:59:31.773377  613002 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-832842 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 19:59:31.780709  613002 kubeadm.go:319] [bootstrap-token] Using token: s9jxxo.i2acucfjf8euorlv
	I1217 19:59:31.782172  613002 out.go:252]   - Configuring RBAC rules ...
	I1217 19:59:31.782360  613002 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 19:59:31.785445  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 19:59:31.791153  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 19:59:31.793759  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 19:59:31.796294  613002 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 19:59:31.798768  613002 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 19:59:32.141800  613002 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 19:59:28.259205  596882 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.360921949s)
	W1217 19:59:28.259284  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:48742->[::1]:8443: read: connection reset by peer
	 output: 
	** stderr ** 
	Get "https://localhost:8443/api/v1/nodes?limit=500": dial tcp [::1]:8443: connect: connection refused - error from a previous attempt: read tcp [::1]:48742->[::1]:8443: read: connection reset by peer
	
	** /stderr **
	I1217 19:59:28.259300  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:28.259321  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:28.295803  596882 logs.go:123] Gathering logs for kube-apiserver [3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab] ...
	I1217 19:59:28.295840  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	W1217 19:59:28.326266  596882 logs.go:130] failed kube-apiserver [3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab": Process exited with status 1
	stdout:
	
	stderr:
	E1217 19:59:28.323499    1306 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist" containerID="3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	time="2025-12-17T19:59:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 19:59:28.323499    1306 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist" containerID="3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab"
	time="2025-12-17T19:59:28Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab\": container with ID starting with 3d49292041fdf8c24ada2dbeb1467162d5310c3e0e8d23eefb19d520df32baab not found: ID does not exist"
	
	** /stderr **
	I1217 19:59:28.326292  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:28.326309  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:28.366825  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:28.366865  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:28.471908  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:28.471951  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:31.012651  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:31.013226  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:31.013285  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:31.013339  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:31.046918  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:31.046945  596882 cri.go:89] found id: ""
	I1217 19:59:31.046966  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:31.047034  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:31.052032  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:31.052140  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:31.085479  596882 cri.go:89] found id: ""
	I1217 19:59:31.085512  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.085524  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:31.085532  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:31.085603  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:31.119059  596882 cri.go:89] found id: ""
	I1217 19:59:31.119109  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.119123  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:31.119133  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:31.119206  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:31.152363  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:31.152389  596882 cri.go:89] found id: ""
	I1217 19:59:31.152399  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:31.152462  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:31.156830  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:31.156924  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:31.189580  596882 cri.go:89] found id: ""
	I1217 19:59:31.189606  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.189614  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:31.189620  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:31.189680  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:31.222872  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:31.222899  596882 cri.go:89] found id: ""
	I1217 19:59:31.222909  596882 logs.go:282] 1 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:31.222986  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:31.227977  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:31.228058  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:31.259327  596882 cri.go:89] found id: ""
	I1217 19:59:31.259355  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.259367  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:31.259374  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:31.259440  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:31.294138  596882 cri.go:89] found id: ""
	I1217 19:59:31.294171  596882 logs.go:282] 0 containers: []
	W1217 19:59:31.294185  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:31.294199  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:31.294216  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:31.333068  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:31.333120  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:31.406667  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:31.406708  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:31.424229  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:31.424261  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:31.486747  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:31.486770  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:31.486789  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:31.517894  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:31.517929  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:31.544828  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:31.544858  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:31.573646  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:31.573678  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
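	The log-gathering loop above keeps using a crictl-or-docker fallback for the container listing. Expanded into plain shell, the one-liner from the log is roughly equivalent to this sketch (illustrative only):

	    # Prefer crictl when it is on PATH; otherwise fall back to docker.
	    if command -v crictl >/dev/null 2>&1; then
	      sudo crictl ps -a
	    else
	      sudo docker ps -a
	    fi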
	I1217 19:59:32.557686  613002 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 19:59:33.141329  613002 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 19:59:33.142187  613002 kubeadm.go:319] 
	I1217 19:59:33.142307  613002 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 19:59:33.142328  613002 kubeadm.go:319] 
	I1217 19:59:33.142444  613002 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 19:59:33.142459  613002 kubeadm.go:319] 
	I1217 19:59:33.142496  613002 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 19:59:33.142590  613002 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 19:59:33.142672  613002 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 19:59:33.142684  613002 kubeadm.go:319] 
	I1217 19:59:33.142771  613002 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 19:59:33.142780  613002 kubeadm.go:319] 
	I1217 19:59:33.142849  613002 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 19:59:33.142864  613002 kubeadm.go:319] 
	I1217 19:59:33.142917  613002 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 19:59:33.143011  613002 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 19:59:33.143159  613002 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 19:59:33.143172  613002 kubeadm.go:319] 
	I1217 19:59:33.143302  613002 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 19:59:33.143424  613002 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 19:59:33.143433  613002 kubeadm.go:319] 
	I1217 19:59:33.143554  613002 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token s9jxxo.i2acucfjf8euorlv \
	I1217 19:59:33.143715  613002 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 19:59:33.143747  613002 kubeadm.go:319] 	--control-plane 
	I1217 19:59:33.143753  613002 kubeadm.go:319] 
	I1217 19:59:33.143883  613002 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 19:59:33.143893  613002 kubeadm.go:319] 
	I1217 19:59:33.143981  613002 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token s9jxxo.i2acucfjf8euorlv \
	I1217 19:59:33.144120  613002 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 19:59:33.145337  613002 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 19:59:33.145459  613002 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
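	The second warning above is kubeadm noting that the kubelet unit is not enabled for boot; the remediation it suggests is simply the following, though the warning is expected here since the test run starts the kubelet service itself:

	    # Suggested by the kubeadm warning text; not required for the test run.
	    sudo systemctl enable kubelet.service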
	I1217 19:59:33.145485  613002 cni.go:84] Creating CNI manager for ""
	I1217 19:59:33.145497  613002 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 19:59:33.148015  613002 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 19:59:29.883834  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:30.383205  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:30.883777  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:31.383264  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:31.883260  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:32.383322  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:32.883645  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.383878  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.883359  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.383556  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
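	The repeated "kubectl get sa default" calls above are minikube polling until the default ServiceAccount exists before it finishes elevating kube-system privileges. A rough shell equivalent of that wait loop, under the same kubeconfig assumption, would be:

	    # Poll until the "default" ServiceAccount has been created by the controller-manager.
	    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done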
	I1217 19:59:33.149190  613002 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 19:59:33.153632  613002 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 19:59:33.153653  613002 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 19:59:33.167865  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 19:59:33.374925  613002 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 19:59:33.374980  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.375055  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-832842 minikube.k8s.io/updated_at=2025_12_17T19_59_33_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=no-preload-832842 minikube.k8s.io/primary=true
	I1217 19:59:33.477436  613002 ops.go:34] apiserver oom_adj: -16
	I1217 19:59:33.477467  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:33.977957  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.478305  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.977579  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.478330  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.977695  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.477648  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.978019  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:34.115468  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:34.115895  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:34.115949  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:34.116003  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:34.145723  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:34.145746  596882 cri.go:89] found id: ""
	I1217 19:59:34.145756  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:34.145819  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:34.149802  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:34.149857  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:34.177893  596882 cri.go:89] found id: ""
	I1217 19:59:34.177924  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.177937  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:34.177947  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:34.178007  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:34.204154  596882 cri.go:89] found id: ""
	I1217 19:59:34.204182  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.204198  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:34.204206  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:34.204281  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:34.233062  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:34.233098  596882 cri.go:89] found id: ""
	I1217 19:59:34.233111  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:34.233166  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:34.237113  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:34.237180  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:34.265676  596882 cri.go:89] found id: ""
	I1217 19:59:34.265702  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.265713  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:34.265721  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:34.265780  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:34.295563  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:34.295590  596882 cri.go:89] found id: ""
	I1217 19:59:34.295600  596882 logs.go:282] 1 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:34.295666  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:34.299895  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:34.299986  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:34.328557  596882 cri.go:89] found id: ""
	I1217 19:59:34.328581  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.328589  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:34.328594  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:34.328658  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:34.359293  596882 cri.go:89] found id: ""
	I1217 19:59:34.359322  596882 logs.go:282] 0 containers: []
	W1217 19:59:34.359334  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:34.359344  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:34.359356  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:34.394043  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:34.394093  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:34.475765  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:34.475802  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:34.494469  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:34.494507  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:34.559885  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:34.559911  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:34.559925  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:34.590687  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:34.590722  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:34.621883  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:34.621923  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:34.650696  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:34.650723  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:37.191714  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:37.192259  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:37.192321  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:37.192391  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:37.221286  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:37.221311  596882 cri.go:89] found id: ""
	I1217 19:59:37.221322  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:37.221378  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:37.225451  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:37.225517  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:37.253499  596882 cri.go:89] found id: ""
	I1217 19:59:37.253530  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.253539  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:37.253545  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:37.253594  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:37.478038  613002 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:37.557714  613002 kubeadm.go:1114] duration metric: took 4.182775928s to wait for elevateKubeSystemPrivileges
	I1217 19:59:37.557764  613002 kubeadm.go:403] duration metric: took 11.427536902s to StartCluster
	I1217 19:59:37.557788  613002 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:37.557873  613002 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:59:37.559508  613002 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:37.559850  613002 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:37.560024  613002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:59:37.560053  613002 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 19:59:37.560195  613002 addons.go:70] Setting storage-provisioner=true in profile "no-preload-832842"
	I1217 19:59:37.560243  613002 addons.go:239] Setting addon storage-provisioner=true in "no-preload-832842"
	I1217 19:59:37.560277  613002 host.go:66] Checking if "no-preload-832842" exists ...
	I1217 19:59:37.560274  613002 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:59:37.560332  613002 addons.go:70] Setting default-storageclass=true in profile "no-preload-832842"
	I1217 19:59:37.560368  613002 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-832842"
	I1217 19:59:37.560717  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:37.560940  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:37.563610  613002 out.go:179] * Verifying Kubernetes components...
	I1217 19:59:37.566021  613002 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:37.595287  613002 addons.go:239] Setting addon default-storageclass=true in "no-preload-832842"
	I1217 19:59:37.595453  613002 host.go:66] Checking if "no-preload-832842" exists ...
	I1217 19:59:37.595654  613002 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:37.596752  613002 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 19:59:37.598138  613002 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:37.598158  613002 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:59:37.598222  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:37.634670  613002 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:37.634698  613002 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:59:37.634767  613002 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 19:59:37.636362  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:37.663482  613002 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 19:59:37.709686  613002 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 19:59:37.748587  613002 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:37.774123  613002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:37.805603  613002 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:37.895739  613002 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 19:59:37.897094  613002 node_ready.go:35] waiting up to 6m0s for node "no-preload-832842" to be "Ready" ...
	I1217 19:59:38.111071  613002 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
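	A few lines up, the long sed pipeline rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to 192.168.103.1 (a hosts { ... fallthrough } stanza is inserted ahead of the forward plugin). If one wanted to confirm the record after the replace, a sketch using the same binary and kubeconfig could be:

	    # Hypothetical verification of the injected host record (not executed by the test).
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get configmap coredns -o yaml | grep -A 2 'hosts {'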
	I1217 19:59:34.883298  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.383802  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:35.883833  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.383846  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:36.882981  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:37.383781  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:37.883281  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:38.383909  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:38.882975  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:39.383488  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:39.882955  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:38.112333  613002 addons.go:530] duration metric: took 552.27323ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 19:59:38.400988  613002 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-832842" context rescaled to 1 replicas
	W1217 19:59:39.900027  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	W1217 19:59:41.900212  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
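	The two warnings above are the node_ready poller seeing Ready=False while the CNI pods come up. The condition it waits on can also be read directly with a jsonpath query; a minimal sketch, assuming the same kubeconfig, is:

	    # Print the node's Ready condition status ("True" once kubelet reports the network as ready).
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      get node no-preload-832842 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'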
	I1217 19:59:37.283518  596882 cri.go:89] found id: ""
	I1217 19:59:37.283547  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.283557  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:37.283564  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:37.283628  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:37.315309  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:37.315342  596882 cri.go:89] found id: ""
	I1217 19:59:37.315353  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:37.315430  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:37.319898  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:37.319984  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:37.348269  596882 cri.go:89] found id: ""
	I1217 19:59:37.348296  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.348305  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:37.348310  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:37.348360  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:37.377023  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:37.377048  596882 cri.go:89] found id: ""
	I1217 19:59:37.377059  596882 logs.go:282] 1 containers: [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:37.377144  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:37.381442  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:37.381506  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:37.414133  596882 cri.go:89] found id: ""
	I1217 19:59:37.414184  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.414197  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:37.414205  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:37.414266  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:37.443819  596882 cri.go:89] found id: ""
	I1217 19:59:37.443851  596882 logs.go:282] 0 containers: []
	W1217 19:59:37.443863  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:37.443876  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:37.443891  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:37.522099  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:37.522141  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:37.545338  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:37.545376  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:37.646905  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:37.646929  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:37.646944  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:37.704266  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:37.704362  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:37.744589  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:37.744633  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:37.781022  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:37.781051  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:37.839983  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:37.840030  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:40.383195  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:40.383600  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:40.383660  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:40.383740  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:40.416354  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:40.416380  596882 cri.go:89] found id: ""
	I1217 19:59:40.416391  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:40.416468  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.421551  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:40.421618  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:40.453001  596882 cri.go:89] found id: ""
	I1217 19:59:40.453026  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.453035  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:40.453040  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:40.453130  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:40.482820  596882 cri.go:89] found id: ""
	I1217 19:59:40.482849  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.482860  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:40.482868  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:40.482941  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:40.513059  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:40.513124  596882 cri.go:89] found id: ""
	I1217 19:59:40.513136  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:40.513219  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.517585  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:40.517647  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:40.547265  596882 cri.go:89] found id: ""
	I1217 19:59:40.547298  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.547311  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:40.547319  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:40.547390  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:40.575327  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:40.575350  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:40.575353  596882 cri.go:89] found id: ""
	I1217 19:59:40.575362  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:40.575428  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.579702  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:40.583810  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:40.583895  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:40.612529  596882 cri.go:89] found id: ""
	I1217 19:59:40.612556  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.612565  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:40.612571  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:40.612626  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:40.638754  596882 cri.go:89] found id: ""
	I1217 19:59:40.638784  596882 logs.go:282] 0 containers: []
	W1217 19:59:40.638793  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:40.638809  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:40.638820  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:40.656628  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:40.656660  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:40.718279  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:40.718304  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:40.718320  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:40.749816  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:40.749848  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:40.789291  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:40.789332  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:40.823447  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:40.823474  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:40.891268  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:40.891304  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:40.921938  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:40.921969  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:40.950934  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:40.950968  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:40.382939  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:40.883642  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:41.383305  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:41.882977  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:42.383458  612025 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 19:59:42.470150  612025 kubeadm.go:1114] duration metric: took 13.178959144s to wait for elevateKubeSystemPrivileges
	I1217 19:59:42.470191  612025 kubeadm.go:403] duration metric: took 23.971663614s to StartCluster
	I1217 19:59:42.470212  612025 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:42.470292  612025 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:59:42.471672  612025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 19:59:42.471932  612025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 19:59:42.471960  612025 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 19:59:42.471933  612025 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 19:59:42.472029  612025 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-894575"
	I1217 19:59:42.472190  612025 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 19:59:42.472206  612025 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-894575"
	I1217 19:59:42.472035  612025 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-894575"
	I1217 19:59:42.472287  612025 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-894575"
	I1217 19:59:42.472255  612025 host.go:66] Checking if "old-k8s-version-894575" exists ...
	I1217 19:59:42.472613  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:42.472868  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:42.473672  612025 out.go:179] * Verifying Kubernetes components...
	I1217 19:59:42.475009  612025 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 19:59:42.497695  612025 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-894575"
	I1217 19:59:42.497748  612025 host.go:66] Checking if "old-k8s-version-894575" exists ...
	I1217 19:59:42.498230  612025 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 19:59:42.498241  612025 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 19:59:42.499515  612025 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:42.499536  612025 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 19:59:42.499590  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:42.530066  612025 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:42.530117  612025 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 19:59:42.530192  612025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 19:59:42.530842  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:42.553294  612025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 19:59:42.568256  612025 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 19:59:42.620733  612025 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 19:59:42.650189  612025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 19:59:42.673914  612025 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 19:59:42.792400  612025 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1217 19:59:42.793909  612025 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-894575" to be "Ready" ...
	I1217 19:59:43.037351  612025 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 19:59:43.038583  612025 addons.go:530] duration metric: took 566.615369ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 19:59:43.297464  612025 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-894575" context rescaled to 1 replicas
	W1217 19:59:44.798596  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	W1217 19:59:43.901710  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	W1217 19:59:46.400051  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	I1217 19:59:43.480183  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:43.480647  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:43.480726  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:43.480783  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:43.518232  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:43.518262  596882 cri.go:89] found id: ""
	I1217 19:59:43.518273  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:43.518337  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.523736  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:43.523817  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:43.565315  596882 cri.go:89] found id: ""
	I1217 19:59:43.565344  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.565356  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:43.565363  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:43.565430  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:43.601570  596882 cri.go:89] found id: ""
	I1217 19:59:43.601597  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.601608  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:43.601618  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:43.601693  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:43.636751  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:43.636772  596882 cri.go:89] found id: ""
	I1217 19:59:43.636787  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:43.636851  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.641329  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:43.641408  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:43.671328  596882 cri.go:89] found id: ""
	I1217 19:59:43.671364  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.671377  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:43.671385  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:43.671444  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:43.705020  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:43.705046  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:43.705052  596882 cri.go:89] found id: ""
	I1217 19:59:43.705062  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:43.705145  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.709433  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:43.713466  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:43.713539  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:43.746015  596882 cri.go:89] found id: ""
	I1217 19:59:43.746045  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.746057  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:43.746064  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:43.746168  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:43.785258  596882 cri.go:89] found id: ""
	I1217 19:59:43.785290  596882 logs.go:282] 0 containers: []
	W1217 19:59:43.785303  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:43.785321  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:43.785336  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:43.808918  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:43.808960  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:43.845935  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:43.845971  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:43.881348  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:43.881385  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:43.944739  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:43.944779  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:43.986673  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:43.986716  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:44.080162  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:44.080222  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:44.142775  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:44.142795  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:44.142810  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:44.176881  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:44.176919  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:46.710466  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:46.711004  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:46.711063  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:46.711165  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:46.740274  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:46.740302  596882 cri.go:89] found id: ""
	I1217 19:59:46.740316  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:46.740437  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.744712  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:46.744795  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:46.773367  596882 cri.go:89] found id: ""
	I1217 19:59:46.773395  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.773408  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:46.773416  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:46.773492  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:46.801877  596882 cri.go:89] found id: ""
	I1217 19:59:46.801910  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.801921  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:46.801929  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:46.801992  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:46.830209  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:46.830250  596882 cri.go:89] found id: ""
	I1217 19:59:46.830260  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:46.830319  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.834571  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:46.834639  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:46.863489  596882 cri.go:89] found id: ""
	I1217 19:59:46.863518  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.863530  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:46.863537  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:46.863590  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:46.892277  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:46.892303  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:46.892540  596882 cri.go:89] found id: ""
	I1217 19:59:46.892767  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:46.892864  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.897970  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:46.902175  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:46.902242  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:46.930663  596882 cri.go:89] found id: ""
	I1217 19:59:46.930695  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.930708  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:46.930721  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:46.930805  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:46.960950  596882 cri.go:89] found id: ""
	I1217 19:59:46.960981  596882 logs.go:282] 0 containers: []
	W1217 19:59:46.960992  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:46.961013  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:46.961033  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:47.018065  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:47.018120  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:47.018143  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:47.049449  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:47.049483  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:47.078285  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:47.078319  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:47.104846  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:47.104873  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:47.133138  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:47.133170  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:47.173109  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:47.173143  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:47.239232  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:47.239269  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:47.255744  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:47.255774  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 19:59:47.297786  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	W1217 19:59:49.798061  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	W1217 19:59:48.400236  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	W1217 19:59:50.400424  613002 node_ready.go:57] node "no-preload-832842" has "Ready":"False" status (will retry)
	I1217 19:59:50.900321  613002 node_ready.go:49] node "no-preload-832842" is "Ready"
	I1217 19:59:50.900360  613002 node_ready.go:38] duration metric: took 13.003221681s for node "no-preload-832842" to be "Ready" ...
	I1217 19:59:50.900388  613002 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:59:50.900450  613002 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:59:50.913774  613002 api_server.go:72] duration metric: took 13.353848727s to wait for apiserver process to appear ...
	I1217 19:59:50.913809  613002 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:59:50.913828  613002 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 19:59:50.919172  613002 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 19:59:50.920038  613002 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 19:59:50.920061  613002 api_server.go:131] duration metric: took 6.246549ms to wait for apiserver health ...
	I1217 19:59:50.920071  613002 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:59:50.923436  613002 system_pods.go:59] 8 kube-system pods found
	I1217 19:59:50.923465  613002 system_pods.go:61] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:50.923471  613002 system_pods.go:61] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:50.923477  613002 system_pods.go:61] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:50.923480  613002 system_pods.go:61] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:50.923485  613002 system_pods.go:61] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:50.923488  613002 system_pods.go:61] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:50.923492  613002 system_pods.go:61] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:50.923501  613002 system_pods.go:61] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:50.923506  613002 system_pods.go:74] duration metric: took 3.398333ms to wait for pod list to return data ...
	I1217 19:59:50.923516  613002 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:59:50.925827  613002 default_sa.go:45] found service account: "default"
	I1217 19:59:50.925845  613002 default_sa.go:55] duration metric: took 2.321212ms for default service account to be created ...
	I1217 19:59:50.925853  613002 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:59:50.928542  613002 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:50.928579  613002 system_pods.go:89] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:50.928589  613002 system_pods.go:89] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:50.928599  613002 system_pods.go:89] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:50.928610  613002 system_pods.go:89] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:50.928616  613002 system_pods.go:89] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:50.928622  613002 system_pods.go:89] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:50.928627  613002 system_pods.go:89] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:50.928634  613002 system_pods.go:89] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:50.928677  613002 retry.go:31] will retry after 250.930958ms: missing components: kube-dns
	I1217 19:59:51.184136  613002 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:51.184174  613002 system_pods.go:89] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:51.184183  613002 system_pods.go:89] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:51.184192  613002 system_pods.go:89] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:51.184201  613002 system_pods.go:89] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:51.184207  613002 system_pods.go:89] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:51.184212  613002 system_pods.go:89] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:51.184218  613002 system_pods.go:89] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:51.184226  613002 system_pods.go:89] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:51.184250  613002 retry.go:31] will retry after 376.252468ms: missing components: kube-dns
	I1217 19:59:51.564522  613002 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:51.564553  613002 system_pods.go:89] "coredns-7d764666f9-988jw" [2e2dabc4-5e32-46d9-a290-4dec02241395] Running
	I1217 19:59:51.564558  613002 system_pods.go:89] "etcd-no-preload-832842" [9a0980a1-4707-453a-a6b2-1c3cc74d35a6] Running
	I1217 19:59:51.564562  613002 system_pods.go:89] "kindnet-t5x5v" [4d27aa06-e030-44a0-880e-a5ae02e7b951] Running
	I1217 19:59:51.564566  613002 system_pods.go:89] "kube-apiserver-no-preload-832842" [092c1856-6e6d-4658-863b-1dd0cf168837] Running
	I1217 19:59:51.564577  613002 system_pods.go:89] "kube-controller-manager-no-preload-832842" [f73ec95a-6e24-41c1-881a-eb6936bbb4a7] Running
	I1217 19:59:51.564580  613002 system_pods.go:89] "kube-proxy-jc5dd" [5c5c87dc-6dbc-4133-9e90-d0650e6a5048] Running
	I1217 19:59:51.564583  613002 system_pods.go:89] "kube-scheduler-no-preload-832842" [36446c37-e6da-44fe-93aa-b30ba79a4db9] Running
	I1217 19:59:51.564587  613002 system_pods.go:89] "storage-provisioner" [d36df8ec-ccab-401f-9ab8-c2a6c8f1e5ed] Running
	I1217 19:59:51.564595  613002 system_pods.go:126] duration metric: took 638.736718ms to wait for k8s-apps to be running ...
	I1217 19:59:51.564602  613002 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:59:51.564649  613002 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:59:51.578626  613002 system_svc.go:56] duration metric: took 14.012455ms WaitForService to wait for kubelet
	I1217 19:59:51.578656  613002 kubeadm.go:587] duration metric: took 14.01873722s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:59:51.578679  613002 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:59:51.581764  613002 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:59:51.581791  613002 node_conditions.go:123] node cpu capacity is 8
	I1217 19:59:51.581806  613002 node_conditions.go:105] duration metric: took 3.122744ms to run NodePressure ...
	I1217 19:59:51.581819  613002 start.go:242] waiting for startup goroutines ...
	I1217 19:59:51.581826  613002 start.go:247] waiting for cluster config update ...
	I1217 19:59:51.581836  613002 start.go:256] writing updated cluster config ...
	I1217 19:59:51.582177  613002 ssh_runner.go:195] Run: rm -f paused
	I1217 19:59:51.586289  613002 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:51.589582  613002 pod_ready.go:83] waiting for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.593968  613002 pod_ready.go:94] pod "coredns-7d764666f9-988jw" is "Ready"
	I1217 19:59:51.593996  613002 pod_ready.go:86] duration metric: took 4.395205ms for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.595975  613002 pod_ready.go:83] waiting for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.599696  613002 pod_ready.go:94] pod "etcd-no-preload-832842" is "Ready"
	I1217 19:59:51.599716  613002 pod_ready.go:86] duration metric: took 3.718479ms for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.601616  613002 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.605249  613002 pod_ready.go:94] pod "kube-apiserver-no-preload-832842" is "Ready"
	I1217 19:59:51.605278  613002 pod_ready.go:86] duration metric: took 3.640206ms for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.607229  613002 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:51.990405  613002 pod_ready.go:94] pod "kube-controller-manager-no-preload-832842" is "Ready"
	I1217 19:59:51.990437  613002 pod_ready.go:86] duration metric: took 383.184181ms for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:49.789056  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:49.789585  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 19:59:49.789657  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:49.789736  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:49.819917  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:49.819962  596882 cri.go:89] found id: ""
	I1217 19:59:49.819976  596882 logs.go:282] 1 containers: [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:49.820049  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.824770  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:49.824849  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:49.853759  596882 cri.go:89] found id: ""
	I1217 19:59:49.853788  596882 logs.go:282] 0 containers: []
	W1217 19:59:49.853797  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:49.853803  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:49.853865  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:49.882284  596882 cri.go:89] found id: ""
	I1217 19:59:49.882314  596882 logs.go:282] 0 containers: []
	W1217 19:59:49.882326  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:49.882334  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:49.882399  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:49.911284  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:49.911316  596882 cri.go:89] found id: ""
	I1217 19:59:49.911331  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:49.911392  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.915472  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:49.915537  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:49.943673  596882 cri.go:89] found id: ""
	I1217 19:59:49.943699  596882 logs.go:282] 0 containers: []
	W1217 19:59:49.943707  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:49.943713  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:49.943770  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:49.972225  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:49.972250  596882 cri.go:89] found id: "96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	I1217 19:59:49.972254  596882 cri.go:89] found id: ""
	I1217 19:59:49.972264  596882 logs.go:282] 2 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]
	I1217 19:59:49.972327  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.976630  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:49.980609  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:49.980686  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:50.008940  596882 cri.go:89] found id: ""
	I1217 19:59:50.008986  596882 logs.go:282] 0 containers: []
	W1217 19:59:50.008999  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:50.009007  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:50.009072  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:50.036325  596882 cri.go:89] found id: ""
	I1217 19:59:50.036349  596882 logs.go:282] 0 containers: []
	W1217 19:59:50.036357  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:50.036374  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:50.036386  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:50.111248  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:50.111289  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:50.129160  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:50.129190  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:50.157595  596882 logs.go:123] Gathering logs for kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb] ...
	I1217 19:59:50.157623  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb"
	W1217 19:59:50.186611  596882 logs.go:130] failed kube-controller-manager [96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 96d62cc516271a9229ae697d73c68f44ce2135124f2d88371c0189bb8de307fb": Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:59:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log: no such file or directory"
	 output: 
	** stderr ** 
	time="2025-12-17T19:59:50Z" level=fatal msg="failed to try resolving symlinks in path \"/var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log\": lstat /var/log/pods/kube-system_kube-controller-manager-kubernetes-upgrade-322567_4c29c87512afff0c3b350e3ae103d245/kube-controller-manager/1.log: no such file or directory"
	
	** /stderr **
	I1217 19:59:50.186639  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:50.186659  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 19:59:50.248285  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 19:59:50.248312  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:50.248328  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:50.281745  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:50.281779  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:50.311990  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 19:59:50.312023  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 19:59:50.356736  596882 logs.go:123] Gathering logs for container status ...
	I1217 19:59:50.356774  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 19:59:52.191067  613002 pod_ready.go:83] waiting for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:52.590415  613002 pod_ready.go:94] pod "kube-proxy-jc5dd" is "Ready"
	I1217 19:59:52.590445  613002 pod_ready.go:86] duration metric: took 399.322512ms for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:52.790680  613002 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:53.190303  613002 pod_ready.go:94] pod "kube-scheduler-no-preload-832842" is "Ready"
	I1217 19:59:53.190337  613002 pod_ready.go:86] duration metric: took 399.629652ms for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:53.190354  613002 pod_ready.go:40] duration metric: took 1.604032629s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:53.236152  613002 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 19:59:53.237766  613002 out.go:179] * Done! kubectl is now configured to use "no-preload-832842" cluster and "default" namespace by default
	W1217 19:59:52.297727  612025 node_ready.go:57] node "old-k8s-version-894575" has "Ready":"False" status (will retry)
	I1217 19:59:54.797471  612025 node_ready.go:49] node "old-k8s-version-894575" is "Ready"
	I1217 19:59:54.797501  612025 node_ready.go:38] duration metric: took 12.002996697s for node "old-k8s-version-894575" to be "Ready" ...
	I1217 19:59:54.797528  612025 api_server.go:52] waiting for apiserver process to appear ...
	I1217 19:59:54.797586  612025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:59:54.809929  612025 api_server.go:72] duration metric: took 12.337857281s to wait for apiserver process to appear ...
	I1217 19:59:54.809963  612025 api_server.go:88] waiting for apiserver healthz status ...
	I1217 19:59:54.809984  612025 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 19:59:54.815168  612025 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 19:59:54.816387  612025 api_server.go:141] control plane version: v1.28.0
	I1217 19:59:54.816414  612025 api_server.go:131] duration metric: took 6.443801ms to wait for apiserver health ...
	I1217 19:59:54.816423  612025 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 19:59:54.819975  612025 system_pods.go:59] 8 kube-system pods found
	I1217 19:59:54.820018  612025 system_pods.go:61] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:54.820032  612025 system_pods.go:61] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:54.820038  612025 system_pods.go:61] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:54.820042  612025 system_pods.go:61] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:54.820045  612025 system_pods.go:61] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:54.820049  612025 system_pods.go:61] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:54.820052  612025 system_pods.go:61] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:54.820058  612025 system_pods.go:61] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:54.820068  612025 system_pods.go:74] duration metric: took 3.638852ms to wait for pod list to return data ...
	I1217 19:59:54.820098  612025 default_sa.go:34] waiting for default service account to be created ...
	I1217 19:59:54.822576  612025 default_sa.go:45] found service account: "default"
	I1217 19:59:54.822600  612025 default_sa.go:55] duration metric: took 2.491238ms for default service account to be created ...
	I1217 19:59:54.822610  612025 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 19:59:54.826610  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:54.826648  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:54.826657  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:54.826668  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:54.826679  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:54.826687  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:54.826698  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:54.826704  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:54.826718  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:54.826766  612025 retry.go:31] will retry after 257.442605ms: missing components: kube-dns
	I1217 19:59:52.890123  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 19:59:55.090303  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:55.090342  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:55.090351  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:55.090361  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:55.090366  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:55.090372  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:55.090377  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:55.090382  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:55.090398  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:55.090427  612025 retry.go:31] will retry after 245.454795ms: missing components: kube-dns
	I1217 19:59:55.340574  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:55.340614  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 19:59:55.340621  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:55.340629  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:55.340632  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:55.340639  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:55.340643  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:55.340646  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:55.340650  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 19:59:55.340665  612025 retry.go:31] will retry after 386.121813ms: missing components: kube-dns
	I1217 19:59:55.730667  612025 system_pods.go:86] 8 kube-system pods found
	I1217 19:59:55.730703  612025 system_pods.go:89] "coredns-5dd5756b68-gbhs5" [d30f3f85-9002-4cf4-b827-6bb0dfd90bd4] Running
	I1217 19:59:55.730709  612025 system_pods.go:89] "etcd-old-k8s-version-894575" [db81d873-90e0-4564-965c-d65b708e7621] Running
	I1217 19:59:55.730713  612025 system_pods.go:89] "kindnet-p8d9f" [73923d5d-ed13-4b01-ad91-71ed716cbd2b] Running
	I1217 19:59:55.730716  612025 system_pods.go:89] "kube-apiserver-old-k8s-version-894575" [47a706f5-62dc-49b3-ba75-772c7c3c0564] Running
	I1217 19:59:55.730720  612025 system_pods.go:89] "kube-controller-manager-old-k8s-version-894575" [d3e24b0c-4542-4d95-93f2-b45d48cd0775] Running
	I1217 19:59:55.730723  612025 system_pods.go:89] "kube-proxy-bdzb6" [6c886a0f-40d4-4f9a-a23e-e3d966a937cd] Running
	I1217 19:59:55.730727  612025 system_pods.go:89] "kube-scheduler-old-k8s-version-894575" [96ff17e5-035d-46ff-aea1-8c356a117abb] Running
	I1217 19:59:55.730729  612025 system_pods.go:89] "storage-provisioner" [0e722d4c-f50c-4835-b78b-bd7a203e9014] Running
	I1217 19:59:55.730737  612025 system_pods.go:126] duration metric: took 908.122007ms to wait for k8s-apps to be running ...
	I1217 19:59:55.730755  612025 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 19:59:55.730805  612025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:59:55.743927  612025 system_svc.go:56] duration metric: took 13.156324ms WaitForService to wait for kubelet
	I1217 19:59:55.743958  612025 kubeadm.go:587] duration metric: took 13.271894959s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 19:59:55.743981  612025 node_conditions.go:102] verifying NodePressure condition ...
	I1217 19:59:55.747182  612025 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 19:59:55.747242  612025 node_conditions.go:123] node cpu capacity is 8
	I1217 19:59:55.747267  612025 node_conditions.go:105] duration metric: took 3.279192ms to run NodePressure ...
	I1217 19:59:55.747282  612025 start.go:242] waiting for startup goroutines ...
	I1217 19:59:55.747292  612025 start.go:247] waiting for cluster config update ...
	I1217 19:59:55.747306  612025 start.go:256] writing updated cluster config ...
	I1217 19:59:55.747638  612025 ssh_runner.go:195] Run: rm -f paused
	I1217 19:59:55.752094  612025 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:55.755926  612025 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.759989  612025 pod_ready.go:94] pod "coredns-5dd5756b68-gbhs5" is "Ready"
	I1217 19:59:55.760009  612025 pod_ready.go:86] duration metric: took 4.059125ms for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.762497  612025 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.766155  612025 pod_ready.go:94] pod "etcd-old-k8s-version-894575" is "Ready"
	I1217 19:59:55.766173  612025 pod_ready.go:86] duration metric: took 3.656843ms for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.768581  612025 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.772199  612025 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-894575" is "Ready"
	I1217 19:59:55.772220  612025 pod_ready.go:86] duration metric: took 3.619593ms for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:55.774707  612025 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.155733  612025 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-894575" is "Ready"
	I1217 19:59:56.155766  612025 pod_ready.go:86] duration metric: took 381.041505ms for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.356678  612025 pod_ready.go:83] waiting for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.755891  612025 pod_ready.go:94] pod "kube-proxy-bdzb6" is "Ready"
	I1217 19:59:56.755920  612025 pod_ready.go:86] duration metric: took 399.214523ms for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:56.956567  612025 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:57.356662  612025 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-894575" is "Ready"
	I1217 19:59:57.356691  612025 pod_ready.go:86] duration metric: took 400.097199ms for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 19:59:57.356705  612025 pod_ready.go:40] duration metric: took 1.604578881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 19:59:57.405563  612025 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1217 19:59:57.407226  612025 out.go:203] 
	W1217 19:59:57.408483  612025 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 19:59:57.409590  612025 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 19:59:57.410881  612025 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-894575" cluster and "default" namespace by default
	I1217 19:59:57.891275  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 19:59:57.891343  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 19:59:57.891398  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 19:59:57.920327  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 19:59:57.920358  596882 cri.go:89] found id: "1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:57.920364  596882 cri.go:89] found id: ""
	I1217 19:59:57.920376  596882 logs.go:282] 2 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b]
	I1217 19:59:57.920433  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:57.924597  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:57.928344  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 19:59:57.928422  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 19:59:57.958281  596882 cri.go:89] found id: ""
	I1217 19:59:57.958306  596882 logs.go:282] 0 containers: []
	W1217 19:59:57.958315  596882 logs.go:284] No container was found matching "etcd"
	I1217 19:59:57.958320  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 19:59:57.958370  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 19:59:57.986230  596882 cri.go:89] found id: ""
	I1217 19:59:57.986257  596882 logs.go:282] 0 containers: []
	W1217 19:59:57.986266  596882 logs.go:284] No container was found matching "coredns"
	I1217 19:59:57.986272  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 19:59:57.986356  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 19:59:58.014902  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:58.014931  596882 cri.go:89] found id: ""
	I1217 19:59:58.014943  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 19:59:58.014996  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:58.019404  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 19:59:58.019483  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 19:59:58.046749  596882 cri.go:89] found id: ""
	I1217 19:59:58.046772  596882 logs.go:282] 0 containers: []
	W1217 19:59:58.046781  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 19:59:58.046788  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 19:59:58.046850  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 19:59:58.074828  596882 cri.go:89] found id: "1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:58.074850  596882 cri.go:89] found id: ""
	I1217 19:59:58.074858  596882 logs.go:282] 1 containers: [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb]
	I1217 19:59:58.074927  596882 ssh_runner.go:195] Run: which crictl
	I1217 19:59:58.078847  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 19:59:58.078910  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 19:59:58.107098  596882 cri.go:89] found id: ""
	I1217 19:59:58.107126  596882 logs.go:282] 0 containers: []
	W1217 19:59:58.107135  596882 logs.go:284] No container was found matching "kindnet"
	I1217 19:59:58.107142  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 19:59:58.107217  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 19:59:58.136602  596882 cri.go:89] found id: ""
	I1217 19:59:58.136626  596882 logs.go:282] 0 containers: []
	W1217 19:59:58.136635  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 19:59:58.136650  596882 logs.go:123] Gathering logs for kube-apiserver [1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b] ...
	I1217 19:59:58.136662  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ca89ebbb5613d16c13191bb7866cf9662b334b933e82c6860753473e8e2060b"
	I1217 19:59:58.168742  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 19:59:58.168777  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 19:59:58.198804  596882 logs.go:123] Gathering logs for kube-controller-manager [1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb] ...
	I1217 19:59:58.198832  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1dee5fecff78a1a61126f20ed261adbf0b690830e4ecf50ef50f99d3aaad09cb"
	I1217 19:59:58.227814  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 19:59:58.227847  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 19:59:58.302151  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 19:59:58.302192  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 19:59:58.319927  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 19:59:58.319975  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Dec 17 19:59:55 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:55.104006826Z" level=info msg="Started container" PID=2191 containerID=fa88c55cdf73d97c9b1b595a91c933fa6f8aad6b1fd27d373cbe45d86a34cce6 description=kube-system/storage-provisioner/storage-provisioner id=9de0073d-5f53-4419-9a2c-633d73cc60a0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=54069e3226287014186be63bcfa83708ace33b654b5da146627efdfd5c99a76d
	Dec 17 19:59:55 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:55.104541994Z" level=info msg="Started container" PID=2192 containerID=6d8e5d2ea52b91cafbe9b518b0267e7185ece3bacead20aa58b83699e06d3650 description=kube-system/coredns-5dd5756b68-gbhs5/coredns id=dfe198d9-d7f8-441c-8bc6-695e51e8e838 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac3e5721e2cfec52fa366bb04d643ef6a3f5ed13d21e15e1a5da1bad0d67a0b7
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.852676389Z" level=info msg="Running pod sandbox: default/busybox/POD" id=83c1142c-c955-4af9-a8d8-77600c68a700 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.852775435Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.857831652Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a58df948b8193acbe8c944f2576ae94b782543490b7324ee2d2f1067ddf71971 UID:333c14fc-c646-4706-87c1-b6301f91b20a NetNS:/var/run/netns/8188274a-548e-4da0-add4-55ef4ff4c0f7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001ad2d8}] Aliases:map[]}"
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.85786099Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.867982797Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a58df948b8193acbe8c944f2576ae94b782543490b7324ee2d2f1067ddf71971 UID:333c14fc-c646-4706-87c1-b6301f91b20a NetNS:/var/run/netns/8188274a-548e-4da0-add4-55ef4ff4c0f7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0001ad2d8}] Aliases:map[]}"
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.868196923Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.868992993Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.869808367Z" level=info msg="Ran pod sandbox a58df948b8193acbe8c944f2576ae94b782543490b7324ee2d2f1067ddf71971 with infra container: default/busybox/POD" id=83c1142c-c955-4af9-a8d8-77600c68a700 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.871020512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0b15c084-0718-4689-9e40-fabcbd53ae09 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.871164117Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0b15c084-0718-4689-9e40-fabcbd53ae09 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.871218635Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0b15c084-0718-4689-9e40-fabcbd53ae09 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.871745172Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=536a0c2d-15b8-4f9e-82a9-a20573f858df name=/runtime.v1.ImageService/PullImage
	Dec 17 19:59:57 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:57.873195849Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.198025369Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=536a0c2d-15b8-4f9e-82a9-a20573f858df name=/runtime.v1.ImageService/PullImage
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.198920558Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0a1ce01a-0944-4c41-970b-29207a97a875 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.200533339Z" level=info msg="Creating container: default/busybox/busybox" id=026dc6bc-9956-40fd-b59e-e634c04b3607 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.200690739Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.204800556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.20530515Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.232412986Z" level=info msg="Created container 8f9cc87bec61f132ee93f544cc9e88c0e6d4ef123917b985b0eae3743486268d: default/busybox/busybox" id=026dc6bc-9956-40fd-b59e-e634c04b3607 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.233479398Z" level=info msg="Starting container: 8f9cc87bec61f132ee93f544cc9e88c0e6d4ef123917b985b0eae3743486268d" id=b41d2326-1bef-4b62-ac34-847e2d8d7c2e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 19:59:59 old-k8s-version-894575 crio[769]: time="2025-12-17T19:59:59.235462813Z" level=info msg="Started container" PID=2269 containerID=8f9cc87bec61f132ee93f544cc9e88c0e6d4ef123917b985b0eae3743486268d description=default/busybox/busybox id=b41d2326-1bef-4b62-ac34-847e2d8d7c2e name=/runtime.v1.RuntimeService/StartContainer sandboxID=a58df948b8193acbe8c944f2576ae94b782543490b7324ee2d2f1067ddf71971
	Dec 17 20:00:05 old-k8s-version-894575 crio[769]: time="2025-12-17T20:00:05.634530095Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	8f9cc87bec61f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   a58df948b8193       busybox                                          default
	6d8e5d2ea52b9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   ac3e5721e2cfe       coredns-5dd5756b68-gbhs5                         kube-system
	fa88c55cdf73d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   54069e3226287       storage-provisioner                              kube-system
	2ad95d72e2d06       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   08ce19fd3e4be       kindnet-p8d9f                                    kube-system
	d1bee48df1f4e       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   2e0dfb6631d77       kube-proxy-bdzb6                                 kube-system
	c4876385b442b       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      43 seconds ago      Running             kube-controller-manager   0                   9b6ba2e3673c7       kube-controller-manager-old-k8s-version-894575   kube-system
	49ac9c111d0f2       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      43 seconds ago      Running             kube-apiserver            0                   e0edaf78f7832       kube-apiserver-old-k8s-version-894575            kube-system
	8f2e7fd5800b4       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      43 seconds ago      Running             etcd                      0                   6cb2e3b5cd710       etcd-old-k8s-version-894575                      kube-system
	a2df48959c120       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      43 seconds ago      Running             kube-scheduler            0                   9015f70d79792       kube-scheduler-old-k8s-version-894575            kube-system
	
	
	==> coredns [6d8e5d2ea52b91cafbe9b518b0267e7185ece3bacead20aa58b83699e06d3650] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36391 - 55288 "HINFO IN 6476573940880978563.4953791443376219784. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.073773534s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-894575
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-894575
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=old-k8s-version-894575
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_59_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:59:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-894575
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 19:59:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 19:59:59 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 19:59:59 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 19:59:59 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 19:59:59 +0000   Wed, 17 Dec 2025 19:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-894575
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                f9507002-721b-4e21-9c9c-8a3faf234561
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-gbhs5                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-old-k8s-version-894575                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         39s
	  kube-system                 kindnet-p8d9f                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-old-k8s-version-894575             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-894575    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-bdzb6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-old-k8s-version-894575             100m (1%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  45s (x8 over 46s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    45s (x8 over 46s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     45s (x8 over 46s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node old-k8s-version-894575 event: Registered Node old-k8s-version-894575 in Controller
	  Normal  NodeReady                13s                kubelet          Node old-k8s-version-894575 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [8f2e7fd5800b4a8a9afea3230686bedd06820288f94b3cf75b7a0aaa6e846d0f] <==
	{"level":"info","ts":"2025-12-17T19:59:23.419668Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-12-17T19:59:23.419777Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-12-17T19:59:23.421804Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-12-17T19:59:23.422107Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T19:59:23.42218Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T19:59:23.422308Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T19:59:23.422353Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-12-17T19:59:24.308832Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T19:59:24.308916Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T19:59:24.308947Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-12-17T19:59:24.308976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T19:59:24.308983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T19:59:24.308994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-12-17T19:59:24.309003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-12-17T19:59:24.310114Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-894575 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T19:59:24.310149Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:59:24.310207Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T19:59:24.310196Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T19:59:24.310334Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T19:59:24.310354Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T19:59:24.310907Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T19:59:24.311031Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T19:59:24.311236Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-12-17T19:59:24.311804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T19:59:24.31184Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 20:00:07 up  1:42,  0 user,  load average: 2.41, 2.98, 2.17
	Linux old-k8s-version-894575 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2ad95d72e2d068005bedced6bd343bf06e096e251cceeeaa70ea843a2bdac248] <==
	I1217 19:59:44.397585       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 19:59:44.397890       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 19:59:44.398071       1 main.go:148] setting mtu 1500 for CNI 
	I1217 19:59:44.398116       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 19:59:44.398137       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T19:59:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 19:59:44.694929       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 19:59:44.695003       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 19:59:44.695031       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 19:59:44.696251       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 19:59:44.992981       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 19:59:44.993114       1 metrics.go:72] Registering metrics
	I1217 19:59:44.993213       1 controller.go:711] "Syncing nftables rules"
	I1217 19:59:54.695339       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 19:59:54.695404       1 main.go:301] handling current node
	I1217 20:00:04.696327       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:00:04.696373       1 main.go:301] handling current node
	
	
	==> kube-apiserver [49ac9c111d0f22ab5c2f1e0d87155b4ac425e8c2e6f010b3f9eafa2d7ba66ff6] <==
	I1217 19:59:25.520974       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 19:59:25.521058       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 19:59:25.521088       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 19:59:25.521945       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 19:59:25.521975       1 aggregator.go:166] initial CRD sync complete...
	I1217 19:59:25.521985       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 19:59:25.521990       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 19:59:25.521997       1 cache.go:39] Caches are synced for autoregister controller
	I1217 19:59:25.522352       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 19:59:25.716517       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 19:59:26.426172       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 19:59:26.430158       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 19:59:26.430175       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 19:59:26.834523       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 19:59:26.868383       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 19:59:26.930411       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 19:59:26.936137       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 19:59:26.937032       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 19:59:26.941001       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 19:59:27.483350       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 19:59:28.323909       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 19:59:28.335535       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 19:59:28.346981       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1217 19:59:42.252137       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1217 19:59:42.403548       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [c4876385b442b5865426d2903fdbb4c8d5adec0ace7b54cd008a051a623678d6] <==
	I1217 19:59:41.750896       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1217 19:59:41.752173       1 shared_informer.go:318] Caches are synced for resource quota
	I1217 19:59:42.071865       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 19:59:42.148585       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 19:59:42.148626       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 19:59:42.255492       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1217 19:59:42.412306       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bdzb6"
	I1217 19:59:42.413483       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p8d9f"
	I1217 19:59:42.559709       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cttqj"
	I1217 19:59:42.566991       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-gbhs5"
	I1217 19:59:42.578495       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="323.045337ms"
	I1217 19:59:42.587007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.43715ms"
	I1217 19:59:42.587147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.767µs"
	I1217 19:59:42.587513       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.884µs"
	I1217 19:59:42.823782       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1217 19:59:42.834588       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-cttqj"
	I1217 19:59:42.842459       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.270188ms"
	I1217 19:59:42.855331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.812114ms"
	I1217 19:59:42.855473       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="87.861µs"
	I1217 19:59:54.748195       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="165.406µs"
	I1217 19:59:54.765049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="162.707µs"
	I1217 19:59:55.506744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.271µs"
	I1217 19:59:55.532689       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.54797ms"
	I1217 19:59:55.532790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="59.961µs"
	I1217 19:59:56.550819       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [d1bee48df1f4e1deed13db75c40872f2a5393a01a8f3c993842bf4a982311aa6] <==
	I1217 19:59:42.845510       1 server_others.go:69] "Using iptables proxy"
	I1217 19:59:42.858154       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1217 19:59:42.881911       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 19:59:42.884564       1 server_others.go:152] "Using iptables Proxier"
	I1217 19:59:42.884614       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 19:59:42.884622       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 19:59:42.884656       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 19:59:42.884924       1 server.go:846] "Version info" version="v1.28.0"
	I1217 19:59:42.884945       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 19:59:42.885540       1 config.go:188] "Starting service config controller"
	I1217 19:59:42.885580       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 19:59:42.885926       1 config.go:97] "Starting endpoint slice config controller"
	I1217 19:59:42.885947       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 19:59:42.886803       1 config.go:315] "Starting node config controller"
	I1217 19:59:42.886871       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 19:59:42.985933       1 shared_informer.go:318] Caches are synced for service config
	I1217 19:59:42.987017       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 19:59:42.987296       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a2df48959c120350d85b8f7a8018efff6ec0e24c752b0cf6114a32e6362f0d95] <==
	W1217 19:59:25.490834       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1217 19:59:25.490910       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1217 19:59:25.490925       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1217 19:59:25.490928       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1217 19:59:25.491024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1217 19:59:25.491043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1217 19:59:25.491138       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1217 19:59:25.491164       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1217 19:59:25.491470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1217 19:59:25.491485       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1217 19:59:25.491498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 19:59:25.491506       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1217 19:59:26.395233       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1217 19:59:26.395265       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 19:59:26.402801       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1217 19:59:26.402835       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1217 19:59:26.412546       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1217 19:59:26.412586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1217 19:59:26.441125       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1217 19:59:26.441163       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1217 19:59:26.523256       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1217 19:59:26.523284       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1217 19:59:26.690479       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1217 19:59:26.690513       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1217 19:59:29.487071       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 19:59:41 old-k8s-version-894575 kubelet[1403]: I1217 19:59:41.529297    1403 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.418010    1403 topology_manager.go:215] "Topology Admit Handler" podUID="6c886a0f-40d4-4f9a-a23e-e3d966a937cd" podNamespace="kube-system" podName="kube-proxy-bdzb6"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.424854    1403 topology_manager.go:215] "Topology Admit Handler" podUID="73923d5d-ed13-4b01-ad91-71ed716cbd2b" podNamespace="kube-system" podName="kindnet-p8d9f"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591067    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6c886a0f-40d4-4f9a-a23e-e3d966a937cd-xtables-lock\") pod \"kube-proxy-bdzb6\" (UID: \"6c886a0f-40d4-4f9a-a23e-e3d966a937cd\") " pod="kube-system/kube-proxy-bdzb6"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591158    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/73923d5d-ed13-4b01-ad91-71ed716cbd2b-cni-cfg\") pod \"kindnet-p8d9f\" (UID: \"73923d5d-ed13-4b01-ad91-71ed716cbd2b\") " pod="kube-system/kindnet-p8d9f"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591188    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73923d5d-ed13-4b01-ad91-71ed716cbd2b-xtables-lock\") pod \"kindnet-p8d9f\" (UID: \"73923d5d-ed13-4b01-ad91-71ed716cbd2b\") " pod="kube-system/kindnet-p8d9f"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591216    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73923d5d-ed13-4b01-ad91-71ed716cbd2b-lib-modules\") pod \"kindnet-p8d9f\" (UID: \"73923d5d-ed13-4b01-ad91-71ed716cbd2b\") " pod="kube-system/kindnet-p8d9f"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591268    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6c886a0f-40d4-4f9a-a23e-e3d966a937cd-kube-proxy\") pod \"kube-proxy-bdzb6\" (UID: \"6c886a0f-40d4-4f9a-a23e-e3d966a937cd\") " pod="kube-system/kube-proxy-bdzb6"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591302    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6c886a0f-40d4-4f9a-a23e-e3d966a937cd-lib-modules\") pod \"kube-proxy-bdzb6\" (UID: \"6c886a0f-40d4-4f9a-a23e-e3d966a937cd\") " pod="kube-system/kube-proxy-bdzb6"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591334    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzptw\" (UniqueName: \"kubernetes.io/projected/6c886a0f-40d4-4f9a-a23e-e3d966a937cd-kube-api-access-kzptw\") pod \"kube-proxy-bdzb6\" (UID: \"6c886a0f-40d4-4f9a-a23e-e3d966a937cd\") " pod="kube-system/kube-proxy-bdzb6"
	Dec 17 19:59:42 old-k8s-version-894575 kubelet[1403]: I1217 19:59:42.591369    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ms2b\" (UniqueName: \"kubernetes.io/projected/73923d5d-ed13-4b01-ad91-71ed716cbd2b-kube-api-access-8ms2b\") pod \"kindnet-p8d9f\" (UID: \"73923d5d-ed13-4b01-ad91-71ed716cbd2b\") " pod="kube-system/kindnet-p8d9f"
	Dec 17 19:59:43 old-k8s-version-894575 kubelet[1403]: I1217 19:59:43.484263    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bdzb6" podStartSLOduration=1.484204104 podCreationTimestamp="2025-12-17 19:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:59:43.483811617 +0000 UTC m=+15.184473170" watchObservedRunningTime="2025-12-17 19:59:43.484204104 +0000 UTC m=+15.184865642"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.725938    1403 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.748132    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-p8d9f" podStartSLOduration=11.306969523 podCreationTimestamp="2025-12-17 19:59:42 +0000 UTC" firstStartedPulling="2025-12-17 19:59:42.735651203 +0000 UTC m=+14.436312736" lastFinishedPulling="2025-12-17 19:59:44.176728522 +0000 UTC m=+15.877390051" observedRunningTime="2025-12-17 19:59:44.483619101 +0000 UTC m=+16.184280639" watchObservedRunningTime="2025-12-17 19:59:54.748046838 +0000 UTC m=+26.448708376"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.748435    1403 topology_manager.go:215] "Topology Admit Handler" podUID="0e722d4c-f50c-4835-b78b-bd7a203e9014" podNamespace="kube-system" podName="storage-provisioner"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.748613    1403 topology_manager.go:215] "Topology Admit Handler" podUID="d30f3f85-9002-4cf4-b827-6bb0dfd90bd4" podNamespace="kube-system" podName="coredns-5dd5756b68-gbhs5"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.881993    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b84wd\" (UniqueName: \"kubernetes.io/projected/0e722d4c-f50c-4835-b78b-bd7a203e9014-kube-api-access-b84wd\") pod \"storage-provisioner\" (UID: \"0e722d4c-f50c-4835-b78b-bd7a203e9014\") " pod="kube-system/storage-provisioner"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.882067    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d30f3f85-9002-4cf4-b827-6bb0dfd90bd4-config-volume\") pod \"coredns-5dd5756b68-gbhs5\" (UID: \"d30f3f85-9002-4cf4-b827-6bb0dfd90bd4\") " pod="kube-system/coredns-5dd5756b68-gbhs5"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.882265    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6h4x8\" (UniqueName: \"kubernetes.io/projected/d30f3f85-9002-4cf4-b827-6bb0dfd90bd4-kube-api-access-6h4x8\") pod \"coredns-5dd5756b68-gbhs5\" (UID: \"d30f3f85-9002-4cf4-b827-6bb0dfd90bd4\") " pod="kube-system/coredns-5dd5756b68-gbhs5"
	Dec 17 19:59:54 old-k8s-version-894575 kubelet[1403]: I1217 19:59:54.882323    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0e722d4c-f50c-4835-b78b-bd7a203e9014-tmp\") pod \"storage-provisioner\" (UID: \"0e722d4c-f50c-4835-b78b-bd7a203e9014\") " pod="kube-system/storage-provisioner"
	Dec 17 19:59:55 old-k8s-version-894575 kubelet[1403]: I1217 19:59:55.506470    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-gbhs5" podStartSLOduration=13.50642026 podCreationTimestamp="2025-12-17 19:59:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:59:55.50632316 +0000 UTC m=+27.206984698" watchObservedRunningTime="2025-12-17 19:59:55.50642026 +0000 UTC m=+27.207081798"
	Dec 17 19:59:55 old-k8s-version-894575 kubelet[1403]: I1217 19:59:55.516676    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.516616714 podCreationTimestamp="2025-12-17 19:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 19:59:55.516343734 +0000 UTC m=+27.217005272" watchObservedRunningTime="2025-12-17 19:59:55.516616714 +0000 UTC m=+27.217278572"
	Dec 17 19:59:57 old-k8s-version-894575 kubelet[1403]: I1217 19:59:57.550600    1403 topology_manager.go:215] "Topology Admit Handler" podUID="333c14fc-c646-4706-87c1-b6301f91b20a" podNamespace="default" podName="busybox"
	Dec 17 19:59:57 old-k8s-version-894575 kubelet[1403]: I1217 19:59:57.702381    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c657z\" (UniqueName: \"kubernetes.io/projected/333c14fc-c646-4706-87c1-b6301f91b20a-kube-api-access-c657z\") pod \"busybox\" (UID: \"333c14fc-c646-4706-87c1-b6301f91b20a\") " pod="default/busybox"
	Dec 17 19:59:59 old-k8s-version-894575 kubelet[1403]: I1217 19:59:59.519072    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.192053165 podCreationTimestamp="2025-12-17 19:59:57 +0000 UTC" firstStartedPulling="2025-12-17 19:59:57.87141561 +0000 UTC m=+29.572077140" lastFinishedPulling="2025-12-17 19:59:59.198377021 +0000 UTC m=+30.899038543" observedRunningTime="2025-12-17 19:59:59.518802154 +0000 UTC m=+31.219463696" watchObservedRunningTime="2025-12-17 19:59:59.519014568 +0000 UTC m=+31.219676105"
	
	
	==> storage-provisioner [fa88c55cdf73d97c9b1b595a91c933fa6f8aad6b1fd27d373cbe45d86a34cce6] <==
	I1217 19:59:55.119364       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 19:59:55.128361       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 19:59:55.128501       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 19:59:55.135964       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 19:59:55.136144       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-894575_d2b25975-fd5d-4a40-962f-26f5269a1980!
	I1217 19:59:55.136160       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"443e0966-e91f-456b-b43e-a7e2d61f2da7", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-894575_d2b25975-fd5d-4a40-962f-26f5269a1980 became leader
	I1217 19:59:55.237356       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-894575_d2b25975-fd5d-4a40-962f-26f5269a1980!
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-894575 -n old-k8s-version-894575
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-894575 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-832842 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-832842 --alsologtostderr -v=1: exit status 80 (2.540976865s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-832842 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:01:20.817291  636438 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:01:20.817398  636438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:20.817407  636438 out.go:374] Setting ErrFile to fd 2...
	I1217 20:01:20.817411  636438 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:20.817628  636438 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:01:20.817856  636438 out.go:368] Setting JSON to false
	I1217 20:01:20.817878  636438 mustload.go:66] Loading cluster: no-preload-832842
	I1217 20:01:20.818267  636438 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:20.818653  636438 cli_runner.go:164] Run: docker container inspect no-preload-832842 --format={{.State.Status}}
	I1217 20:01:20.837039  636438 host.go:66] Checking if "no-preload-832842" exists ...
	I1217 20:01:20.837364  636438 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:01:20.893877  636438 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 20:01:20.883863007 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:01:20.894560  636438 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-832842 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 20:01:20.896501  636438 out.go:179] * Pausing node no-preload-832842 ... 
	I1217 20:01:20.897729  636438 host.go:66] Checking if "no-preload-832842" exists ...
	I1217 20:01:20.898018  636438 ssh_runner.go:195] Run: systemctl --version
	I1217 20:01:20.898109  636438 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-832842
	I1217 20:01:20.915881  636438 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/no-preload-832842/id_rsa Username:docker}
	I1217 20:01:21.017512  636438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:21.040863  636438 pause.go:52] kubelet running: true
	I1217 20:01:21.040939  636438 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:21.214278  636438 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:21.214378  636438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:21.286925  636438 cri.go:89] found id: "d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137"
	I1217 20:01:21.286956  636438 cri.go:89] found id: "df79f3414f09421efcb91bbc4abcc73e07bf62fc320f79ed6c541180aa4945ab"
	I1217 20:01:21.286962  636438 cri.go:89] found id: "74a2be0dba394331147af1f7139cc8715764693116a735ed916bd4c8ee2fd3bf"
	I1217 20:01:21.286967  636438 cri.go:89] found id: "574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144"
	I1217 20:01:21.286971  636438 cri.go:89] found id: "6dc1bf580a5e5d88fdf2f6bbe5d1905fb56db30030d094660f124897fd457658"
	I1217 20:01:21.286977  636438 cri.go:89] found id: "aa0f70514b3b3987679fa08562d6a29d0cde6f41668ff6920603c0af90405bbe"
	I1217 20:01:21.286981  636438 cri.go:89] found id: "3c8014a76c7ede91c3cd5009249d11a432295b5b5abd84d90df0cea58173d3dd"
	I1217 20:01:21.286985  636438 cri.go:89] found id: "93adc4b861b7c2cb084b258ba073a7308743dab281018c38f60ca99fa8a8c8eb"
	I1217 20:01:21.286989  636438 cri.go:89] found id: "fc98dcbd3e923feb9befb5e08f3923050cddcdcd6ec0dde8a4a828548f21afbc"
	I1217 20:01:21.287001  636438 cri.go:89] found id: "c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	I1217 20:01:21.287005  636438 cri.go:89] found id: "55c1a97eef28cd0406e0d4aef3df5a460e2bc3114b4471c21d47e187a026216d"
	I1217 20:01:21.287008  636438 cri.go:89] found id: ""
	I1217 20:01:21.287070  636438 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:21.299382  636438 retry.go:31] will retry after 273.541071ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:21Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:01:21.573917  636438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:21.587843  636438 pause.go:52] kubelet running: false
	I1217 20:01:21.587910  636438 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:21.734238  636438 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:21.734366  636438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:21.805494  636438 cri.go:89] found id: "d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137"
	I1217 20:01:21.805517  636438 cri.go:89] found id: "df79f3414f09421efcb91bbc4abcc73e07bf62fc320f79ed6c541180aa4945ab"
	I1217 20:01:21.805523  636438 cri.go:89] found id: "74a2be0dba394331147af1f7139cc8715764693116a735ed916bd4c8ee2fd3bf"
	I1217 20:01:21.805529  636438 cri.go:89] found id: "574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144"
	I1217 20:01:21.805534  636438 cri.go:89] found id: "6dc1bf580a5e5d88fdf2f6bbe5d1905fb56db30030d094660f124897fd457658"
	I1217 20:01:21.805540  636438 cri.go:89] found id: "aa0f70514b3b3987679fa08562d6a29d0cde6f41668ff6920603c0af90405bbe"
	I1217 20:01:21.805544  636438 cri.go:89] found id: "3c8014a76c7ede91c3cd5009249d11a432295b5b5abd84d90df0cea58173d3dd"
	I1217 20:01:21.805549  636438 cri.go:89] found id: "93adc4b861b7c2cb084b258ba073a7308743dab281018c38f60ca99fa8a8c8eb"
	I1217 20:01:21.805553  636438 cri.go:89] found id: "fc98dcbd3e923feb9befb5e08f3923050cddcdcd6ec0dde8a4a828548f21afbc"
	I1217 20:01:21.805573  636438 cri.go:89] found id: "c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	I1217 20:01:21.805581  636438 cri.go:89] found id: "55c1a97eef28cd0406e0d4aef3df5a460e2bc3114b4471c21d47e187a026216d"
	I1217 20:01:21.805586  636438 cri.go:89] found id: ""
	I1217 20:01:21.805629  636438 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:21.818650  636438 retry.go:31] will retry after 311.01187ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:21Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:01:22.130308  636438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:22.144617  636438 pause.go:52] kubelet running: false
	I1217 20:01:22.144675  636438 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:22.288728  636438 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:22.288827  636438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:22.358555  636438 cri.go:89] found id: "d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137"
	I1217 20:01:22.358578  636438 cri.go:89] found id: "df79f3414f09421efcb91bbc4abcc73e07bf62fc320f79ed6c541180aa4945ab"
	I1217 20:01:22.358584  636438 cri.go:89] found id: "74a2be0dba394331147af1f7139cc8715764693116a735ed916bd4c8ee2fd3bf"
	I1217 20:01:22.358590  636438 cri.go:89] found id: "574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144"
	I1217 20:01:22.358594  636438 cri.go:89] found id: "6dc1bf580a5e5d88fdf2f6bbe5d1905fb56db30030d094660f124897fd457658"
	I1217 20:01:22.358602  636438 cri.go:89] found id: "aa0f70514b3b3987679fa08562d6a29d0cde6f41668ff6920603c0af90405bbe"
	I1217 20:01:22.358607  636438 cri.go:89] found id: "3c8014a76c7ede91c3cd5009249d11a432295b5b5abd84d90df0cea58173d3dd"
	I1217 20:01:22.358611  636438 cri.go:89] found id: "93adc4b861b7c2cb084b258ba073a7308743dab281018c38f60ca99fa8a8c8eb"
	I1217 20:01:22.358615  636438 cri.go:89] found id: "fc98dcbd3e923feb9befb5e08f3923050cddcdcd6ec0dde8a4a828548f21afbc"
	I1217 20:01:22.358630  636438 cri.go:89] found id: "c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	I1217 20:01:22.358636  636438 cri.go:89] found id: "55c1a97eef28cd0406e0d4aef3df5a460e2bc3114b4471c21d47e187a026216d"
	I1217 20:01:22.358639  636438 cri.go:89] found id: ""
	I1217 20:01:22.358681  636438 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:22.373253  636438 retry.go:31] will retry after 626.103108ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:22Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:01:22.999627  636438 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:23.018637  636438 pause.go:52] kubelet running: false
	I1217 20:01:23.018714  636438 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:23.190197  636438 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:23.190298  636438 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:23.269788  636438 cri.go:89] found id: "d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137"
	I1217 20:01:23.269816  636438 cri.go:89] found id: "df79f3414f09421efcb91bbc4abcc73e07bf62fc320f79ed6c541180aa4945ab"
	I1217 20:01:23.269824  636438 cri.go:89] found id: "74a2be0dba394331147af1f7139cc8715764693116a735ed916bd4c8ee2fd3bf"
	I1217 20:01:23.269829  636438 cri.go:89] found id: "574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144"
	I1217 20:01:23.269834  636438 cri.go:89] found id: "6dc1bf580a5e5d88fdf2f6bbe5d1905fb56db30030d094660f124897fd457658"
	I1217 20:01:23.269840  636438 cri.go:89] found id: "aa0f70514b3b3987679fa08562d6a29d0cde6f41668ff6920603c0af90405bbe"
	I1217 20:01:23.269845  636438 cri.go:89] found id: "3c8014a76c7ede91c3cd5009249d11a432295b5b5abd84d90df0cea58173d3dd"
	I1217 20:01:23.269864  636438 cri.go:89] found id: "93adc4b861b7c2cb084b258ba073a7308743dab281018c38f60ca99fa8a8c8eb"
	I1217 20:01:23.269869  636438 cri.go:89] found id: "fc98dcbd3e923feb9befb5e08f3923050cddcdcd6ec0dde8a4a828548f21afbc"
	I1217 20:01:23.269884  636438 cri.go:89] found id: "c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	I1217 20:01:23.269888  636438 cri.go:89] found id: "55c1a97eef28cd0406e0d4aef3df5a460e2bc3114b4471c21d47e187a026216d"
	I1217 20:01:23.269893  636438 cri.go:89] found id: ""
	I1217 20:01:23.269942  636438 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:23.285450  636438 out.go:203] 
	W1217 20:01:23.286630  636438 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:01:23.286651  636438 out.go:285] * 
	* 
	W1217 20:01:23.291588  636438 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:01:23.292800  636438 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-832842 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-832842
helpers_test.go:244: (dbg) docker inspect no-preload-832842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4",
	        "Created": "2025-12-17T19:59:10.833809324Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:00:20.645613398Z",
	            "FinishedAt": "2025-12-17T20:00:19.733406734Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/hosts",
	        "LogPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4-json.log",
	        "Name": "/no-preload-832842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-832842:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-832842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4",
	                "LowerDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-832842",
	                "Source": "/var/lib/docker/volumes/no-preload-832842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-832842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-832842",
	                "name.minikube.sigs.k8s.io": "no-preload-832842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "56b9b7028a6e31debffefaa714d520e79fd4d737efec11c3d53f4106876c3114",
	            "SandboxKey": "/var/run/docker/netns/56b9b7028a6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-832842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a19db78cafed3da0943e15828af72c0aafbad853d47090363f5479ad475afe12",
	                    "EndpointID": "0d958ef65dfadfc20a86edb80bd98067acf45048f3df96dbf481fce6467d7328",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:9d:68:e2:a8:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-832842",
	                        "dc205de21d84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842: exit status 2 (368.697405ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-832842 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-832842 logs -n 25: (1.180365438s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p NoKubernetes-327438 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ cert-options-997440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p cert-options-997440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p cert-options-997440                                                                                                                                                                                                                        │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p NoKubernetes-327438 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                               │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p old-k8s-version-894575 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                    │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                               │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:00:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:00:42.430475  631473 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:00:42.430717  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430725  631473 out.go:374] Setting ErrFile to fd 2...
	I1217 20:00:42.430734  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430932  631473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:00:42.431484  631473 out.go:368] Setting JSON to false
	I1217 20:00:42.432651  631473 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6193,"bootTime":1765995449,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:00:42.432716  631473 start.go:143] virtualization: kvm guest
	I1217 20:00:42.434554  631473 out.go:179] * [default-k8s-diff-port-759234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:00:42.436272  631473 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:00:42.436339  631473 notify.go:221] Checking for updates...
	I1217 20:00:42.438673  631473 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:00:42.439791  631473 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:00:42.444253  631473 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:00:42.445569  631473 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:00:42.446765  631473 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:00:42.448395  631473 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448504  631473 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448574  631473 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 20:00:42.448676  631473 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:00:42.473152  631473 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:00:42.473303  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.530715  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.520326347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.530839  631473 docker.go:319] overlay module found
	I1217 20:00:42.533607  631473 out.go:179] * Using the docker driver based on user configuration
	I1217 20:00:42.534900  631473 start.go:309] selected driver: docker
	I1217 20:00:42.534931  631473 start.go:927] validating driver "docker" against <nil>
	I1217 20:00:42.534945  631473 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:00:42.535594  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.593983  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.584279589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.594185  631473 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:00:42.594402  631473 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:00:42.596050  631473 out.go:179] * Using Docker driver with root privileges
	I1217 20:00:42.597217  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:42.597290  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:42.597303  631473 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:00:42.597383  631473 start.go:353] cluster config:
	{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:42.599022  631473 out.go:179] * Starting "default-k8s-diff-port-759234" primary control-plane node in "default-k8s-diff-port-759234" cluster
	I1217 20:00:42.600540  631473 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:00:42.601819  631473 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:00:42.603027  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:42.603089  631473 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:00:42.603104  631473 cache.go:65] Caching tarball of preloaded images
	I1217 20:00:42.603158  631473 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:00:42.603241  631473 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:00:42.603255  631473 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:00:42.603409  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:42.603441  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json: {Name:mka62982d045e5cb058ac77025f345457b6a6373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:42.624544  631473 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:00:42.624564  631473 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:00:42.624587  631473 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:00:42.624618  631473 start.go:360] acquireMachinesLock for default-k8s-diff-port-759234: {Name:mk173016aaa355dafae1bd5727aae1037817b426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:00:42.624714  631473 start.go:364] duration metric: took 77.83µs to acquireMachinesLock for "default-k8s-diff-port-759234"
	I1217 20:00:42.624738  631473 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:00:42.624812  631473 start.go:125] createHost starting for "" (driver="docker")
	W1217 20:00:39.572913  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.072117  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:44.072432  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.104752  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:44.105460  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:44.011034  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:44.011594  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
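
The repeated healthz checks above come from a wait loop that polls the API server until it answers; "connection refused" simply means the apiserver container has not come back up yet. A minimal, self-contained sketch of such a poll in Go (illustrative only; the URL, timeout, and the insecure TLS client are assumptions, not minikube's actual implementation):

// healthz_poll.go - illustrative sketch of waiting for an apiserver /healthz
// endpoint, retrying while the connection is still refused.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// InsecureSkipVerify keeps the sketch self-contained; a real client
	// would trust the cluster CA instead.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver answered /healthz with 200
			}
		}
		time.Sleep(2 * time.Second) // back off and retry, e.g. on connection refused
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
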
	I1217 20:00:44.011658  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:44.011708  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:44.044351  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.044381  596882 cri.go:89] found id: ""
	I1217 20:00:44.044394  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:44.044463  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.049338  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:44.049428  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:44.080283  596882 cri.go:89] found id: ""
	I1217 20:00:44.080314  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.080326  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:44.080337  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:44.080404  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:44.113789  596882 cri.go:89] found id: ""
	I1217 20:00:44.113818  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.113829  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:44.113835  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:44.113889  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:44.146485  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.146516  596882 cri.go:89] found id: ""
	I1217 20:00:44.146529  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:44.146598  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.150860  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:44.150933  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:44.180612  596882 cri.go:89] found id: ""
	I1217 20:00:44.180648  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.180661  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:44.180669  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:44.180733  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:44.215315  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.215341  596882 cri.go:89] found id: ""
	I1217 20:00:44.215351  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:44.215410  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.219707  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:44.219792  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:44.250358  596882 cri.go:89] found id: ""
	I1217 20:00:44.250390  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.250402  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:44.250410  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:44.250480  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:44.279599  596882 cri.go:89] found id: ""
	I1217 20:00:44.279629  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.279639  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:44.279654  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:44.279673  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:44.366299  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:44.366333  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:44.383253  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:44.383288  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:44.442881  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:44.442906  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:44.442929  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.483060  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:44.483124  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.514331  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:44.514367  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.542722  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:44.542760  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:44.590351  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:44.590389  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
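
Each "Gathering logs for ..." step above runs crictl logs --tail 400 against a container ID discovered earlier. A rough local sketch of that loop (illustrative; minikube executes the same commands over SSH inside the node container):

// gather_logs.go - illustrative sketch of the log-gathering loop: for each
// discovered container ID, run "crictl logs --tail 400 <id>".
// Assumes crictl is installed on the local machine.
package main

import (
	"fmt"
	"os/exec"
)

func gatherLogs(ids map[string]string) {
	for name, id := range ids {
		out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			fmt.Printf("failed to gather %s logs: %v\n", name, err)
			continue
		}
		fmt.Printf("==> %s [%s]\n%s\n", name, id, out)
	}
}

func main() {
	// Container IDs taken from the log output above (for illustration only).
	gatherLogs(map[string]string{
		"kube-apiserver":          "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc",
		"kube-scheduler":          "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29",
		"kube-controller-manager": "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c",
	})
}
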
	I1217 20:00:47.127294  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:47.127787  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:47.127853  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:47.127918  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:47.156370  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.156396  596882 cri.go:89] found id: ""
	I1217 20:00:47.156404  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:47.156460  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.160516  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:47.160594  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:47.195038  596882 cri.go:89] found id: ""
	I1217 20:00:47.195068  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.195137  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:47.195143  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:47.195196  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:47.226808  596882 cri.go:89] found id: ""
	I1217 20:00:47.226835  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.226845  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:47.226851  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:47.226903  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:42.626516  631473 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:00:42.626787  631473 start.go:159] libmachine.API.Create for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:42.626819  631473 client.go:173] LocalClient.Create starting
	I1217 20:00:42.626888  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:00:42.626923  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.626942  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.626999  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:00:42.627020  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.627031  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.627386  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:00:42.645356  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:00:42.645431  631473 network_create.go:284] running [docker network inspect default-k8s-diff-port-759234] to gather additional debugging logs...
	I1217 20:00:42.645452  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234
	W1217 20:00:42.662433  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 returned with exit code 1
	I1217 20:00:42.662463  631473 network_create.go:287] error running [docker network inspect default-k8s-diff-port-759234]: docker network inspect default-k8s-diff-port-759234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-759234 not found
	I1217 20:00:42.662486  631473 network_create.go:289] output of [docker network inspect default-k8s-diff-port-759234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-759234 not found
	
	** /stderr **
	I1217 20:00:42.662577  631473 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:42.680765  631473 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:00:42.681557  631473 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:00:42.682052  631473 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:00:42.682584  631473 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 20:00:42.683304  631473 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 20:00:42.684136  631473 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4b420}
	I1217 20:00:42.684173  631473 network_create.go:124] attempt to create docker network default-k8s-diff-port-759234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 20:00:42.684252  631473 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 default-k8s-diff-port-759234
	I1217 20:00:42.733976  631473 network_create.go:108] docker network default-k8s-diff-port-759234 192.168.94.0/24 created
	I1217 20:00:42.734006  631473 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-759234" container
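
The network.go lines above walk candidate private /24 subnets, skip any that already back an existing bridge, and settle on the first free one. A simplified sketch of that selection (illustrative; the candidate list and step size are inferred from the log, not taken from minikube's source):

// subnet_pick.go - illustrative sketch of choosing a free 192.168.x.0/24
// subnet by skipping ones already assigned to existing Docker networks,
// mirroring the "skipping subnet ... that is taken" lines above.
package main

import "fmt"

func pickFreeSubnet(taken map[string]bool) (string, bool) {
	// The log shows candidates 9 apart (49, 58, 67, ...); the real
	// implementation derives them differently, this is just for illustration.
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{ // subnets reported as taken in the log above
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	if cidr, ok := pickFreeSubnet(taken); ok {
		fmt.Println("using free private subnet", cidr) // prints 192.168.94.0/24
	}
}
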
	I1217 20:00:42.734062  631473 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:00:42.752583  631473 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-759234 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:00:42.773596  631473 oci.go:103] Successfully created a docker volume default-k8s-diff-port-759234
	I1217 20:00:42.773686  631473 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-759234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --entrypoint /usr/bin/test -v default-k8s-diff-port-759234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:00:43.205798  631473 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-759234
	I1217 20:00:43.205868  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:43.205880  631473 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:00:43.205970  631473 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:00:47.198577  631473 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.992562765s)
	I1217 20:00:47.198609  631473 kic.go:203] duration metric: took 3.992725296s to extract preloaded images to volume ...
	W1217 20:00:47.198694  631473 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:00:47.198723  631473 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:00:47.198767  631473 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:00:47.260923  631473 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-759234 --name default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --network default-k8s-diff-port-759234 --ip 192.168.94.2 --volume default-k8s-diff-port-759234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
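
The docker run above publishes the container's SSH and Kubernetes ports on ephemeral host ports (127.0.0.1::22, 127.0.0.1::8444, ...), which are later resolved with docker container inspect and a Go template, as in the inspect calls further down. A minimal sketch of that lookup (assumes the docker CLI is on PATH):

// hostport.go - illustrative sketch of resolving which host port Docker
// assigned to a published container port (here 22/tcp), using the same
// "docker container inspect -f" template style seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("default-k8s-diff-port-759234", "22/tcp")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + port)
}
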
	W1217 20:00:46.572829  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:49.072264  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:46.605455  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:49.104308  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:47.261698  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.261722  596882 cri.go:89] found id: ""
	I1217 20:00:47.261733  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:47.261790  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.267357  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:47.267438  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:47.306726  596882 cri.go:89] found id: ""
	I1217 20:00:47.306759  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.306770  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:47.306778  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:47.306842  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:47.340875  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.340912  596882 cri.go:89] found id: ""
	I1217 20:00:47.340924  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:47.341135  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.345736  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:47.345806  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:47.376962  596882 cri.go:89] found id: ""
	I1217 20:00:47.377012  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.377025  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:47.377032  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:47.377124  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:47.407325  596882 cri.go:89] found id: ""
	I1217 20:00:47.407359  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.407374  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:47.407387  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:47.407408  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:47.473703  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:47.473725  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:47.473743  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.508764  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:47.508811  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.539065  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:47.539113  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.571543  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:47.571587  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:47.643416  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:47.643456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.689273  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:47.689316  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:47.823222  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:47.823260  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.347237  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:50.347659  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:50.347717  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:50.348197  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:50.391187  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.391339  596882 cri.go:89] found id: ""
	I1217 20:00:50.391419  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:50.391505  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.396902  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:50.397015  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:50.441286  596882 cri.go:89] found id: ""
	I1217 20:00:50.441360  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.441373  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:50.441389  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:50.441452  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:50.479045  596882 cri.go:89] found id: ""
	I1217 20:00:50.479088  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.479100  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:50.479108  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:50.479174  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:50.515926  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.516275  596882 cri.go:89] found id: ""
	I1217 20:00:50.516295  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:50.516365  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.522153  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:50.522238  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:50.562124  596882 cri.go:89] found id: ""
	I1217 20:00:50.562187  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.562199  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:50.562208  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:50.562277  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:50.601222  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:50.601377  596882 cri.go:89] found id: ""
	I1217 20:00:50.601396  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:50.601522  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.607093  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:50.607179  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:50.643677  596882 cri.go:89] found id: ""
	I1217 20:00:50.643709  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.643725  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:50.643734  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:50.643810  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:50.683346  596882 cri.go:89] found id: ""
	I1217 20:00:50.683378  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.683389  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:50.683402  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:50.683418  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:50.807284  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:50.807323  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.829965  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:50.830005  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:50.903560  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:50.903583  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:50.903608  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.952336  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:50.952375  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.986508  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:50.986545  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:51.022486  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:51.022517  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:51.088659  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:51.088715  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.583096  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Running}}
	I1217 20:00:47.608914  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.634283  631473 cli_runner.go:164] Run: docker exec default-k8s-diff-port-759234 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:00:47.694519  631473 oci.go:144] the created container "default-k8s-diff-port-759234" has a running status.
	I1217 20:00:47.694556  631473 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa...
	I1217 20:00:47.741322  631473 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:00:47.777682  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.801570  631473 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:00:47.801595  631473 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-759234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:00:47.858176  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.886441  631473 machine.go:94] provisionDockerMachine start ...
	I1217 20:00:47.886562  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:47.913250  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:47.913628  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:47.913655  631473 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:00:47.914572  631473 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49044->127.0.0.1:33453: read: connection reset by peer
	I1217 20:00:51.082474  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.082503  631473 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-759234"
	I1217 20:00:51.082569  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.109173  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.109464  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.109487  631473 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-759234 && echo "default-k8s-diff-port-759234" | sudo tee /etc/hostname
	I1217 20:00:51.282514  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.282597  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.302139  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.302370  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.302388  631473 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-759234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-759234/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-759234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:00:51.456372  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
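
The SSH commands above set the hostname and then ensure /etc/hosts maps 127.0.1.1 to it exactly once, editing an existing entry if present and appending otherwise. A sketch of how such a command string can be assembled before being sent over SSH (hostsCmd is a hypothetical helper, not a minikube function):

// sethostname.go - illustrative sketch of building the idempotent
// hostname / /etc/hosts update command that is then executed over SSH.
package main

import "fmt"

func hostsCmd(hostname string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsCmd("default-k8s-diff-port-759234"))
}
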
	I1217 20:00:51.456426  631473 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:00:51.456479  631473 ubuntu.go:190] setting up certificates
	I1217 20:00:51.456491  631473 provision.go:84] configureAuth start
	I1217 20:00:51.456563  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:51.480508  631473 provision.go:143] copyHostCerts
	I1217 20:00:51.480576  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:00:51.480592  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:00:51.480669  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:00:51.480772  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:00:51.480783  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:00:51.480822  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:00:51.480896  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:00:51.480906  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:00:51.480938  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:00:51.481006  631473 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-759234 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-759234 localhost minikube]
	I1217 20:00:51.633655  631473 provision.go:177] copyRemoteCerts
	I1217 20:00:51.633763  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:00:51.633814  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.658060  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:51.774263  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:00:51.836683  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 20:00:51.862224  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:00:51.890608  631473 provision.go:87] duration metric: took 434.096039ms to configureAuth
	I1217 20:00:51.890644  631473 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:00:51.890863  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:00:51.891022  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.916236  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.916552  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.916578  631473 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:00:52.350209  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:00:52.350238  631473 machine.go:97] duration metric: took 4.46376868s to provisionDockerMachine
	I1217 20:00:52.350253  631473 client.go:176] duration metric: took 9.723424305s to LocalClient.Create
	I1217 20:00:52.350277  631473 start.go:167] duration metric: took 9.72348972s to libmachine.API.Create "default-k8s-diff-port-759234"
	I1217 20:00:52.350294  631473 start.go:293] postStartSetup for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:52.350305  631473 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:00:52.350383  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:00:52.350429  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.369228  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.477868  631473 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:00:52.482314  631473 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:00:52.482357  631473 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:00:52.482372  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:00:52.482454  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:00:52.482534  631473 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:00:52.482625  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:00:52.491557  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:52.515015  631473 start.go:296] duration metric: took 164.702667ms for postStartSetup
	I1217 20:00:52.515418  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.535477  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:52.535813  631473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:00:52.535873  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.555517  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.657422  631473 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:00:52.662205  631473 start.go:128] duration metric: took 10.037371351s to createHost
	I1217 20:00:52.662241  631473 start.go:83] releasing machines lock for "default-k8s-diff-port-759234", held for 10.037515093s
	I1217 20:00:52.662322  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.680193  631473 ssh_runner.go:195] Run: cat /version.json
	I1217 20:00:52.680276  631473 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:00:52.680310  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.680347  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.701061  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.701301  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.851661  631473 ssh_runner.go:195] Run: systemctl --version
	I1217 20:00:52.858481  631473 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:00:52.893608  631473 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:00:52.898824  631473 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:00:52.898902  631473 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:00:52.924893  631473 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:00:52.924917  631473 start.go:496] detecting cgroup driver to use...
	I1217 20:00:52.924946  631473 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:00:52.924995  631473 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:00:52.941996  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:00:52.954497  631473 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:00:52.954559  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:00:52.971423  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:00:52.990488  631473 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:00:53.079469  631473 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:00:53.166815  631473 docker.go:234] disabling docker service ...
	I1217 20:00:53.166878  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:00:53.186920  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:00:53.200855  631473 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:00:53.290366  631473 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:00:53.387334  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:00:53.400172  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:00:53.415056  631473 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:00:53.415136  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.425540  631473 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:00:53.425617  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.435225  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.444865  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.455024  631473 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:00:53.464046  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.473632  631473 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.488327  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.498230  631473 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:00:53.506887  631473 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:00:53.516474  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:53.601252  631473 ssh_runner.go:195] Run: sudo systemctl restart crio
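
Taken together, the sed edits above leave the CRI-O drop-in with roughly the following settings before the daemon is restarted (fragment only, inferred from the sed commands; the surrounding sections and other keys in /etc/crio/crio.conf.d/02-crio.conf are untouched):

# net effect of the edits above on /etc/crio/crio.conf.d/02-crio.conf (sketch)
pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
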
	I1217 20:00:54.068135  631473 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:00:54.068217  631473 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:00:54.073472  631473 start.go:564] Will wait 60s for crictl version
	I1217 20:00:54.073554  631473 ssh_runner.go:195] Run: which crictl
	I1217 20:00:54.078383  631473 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:00:54.106787  631473 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:00:54.106878  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.140042  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.172909  631473 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
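
Before declaring the runtime ready, the steps above wait up to 60s for the CRI-O socket to appear and then query crictl for the runtime version. A local sketch of those two waits (illustrative; the real steps run over SSH inside the node container):

// wait_crio.go - illustrative sketch of polling for the CRI-O socket and
// then asking crictl for the runtime version, as in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s not ready after %s", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	out, err := exec.Command("sudo", "crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println("crictl version failed:", err)
		return
	}
	fmt.Print(string(out))
}
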
	W1217 20:00:51.073128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:53.572242  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:51.105457  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:53.606663  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:53.632189  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:53.632791  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
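	The healthz polling above is a plain HTTPS GET against the apiserver; a rough hand-run equivalent (minikube uses a Go HTTP client rather than curl; the address comes from the log and -k skips certificate verification):
	  curl -k https://192.168.76.2:8443/healthz
	  # "connection refused" here corresponds to the stopped: ... connect: connection refused lines above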
	I1217 20:00:53.632867  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:53.632941  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:53.662308  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:53.662339  596882 cri.go:89] found id: ""
	I1217 20:00:53.662350  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:53.662420  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.666413  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:53.666495  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:53.695377  596882 cri.go:89] found id: ""
	I1217 20:00:53.695409  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.695421  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:53.695429  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:53.695516  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:53.724146  596882 cri.go:89] found id: ""
	I1217 20:00:53.724177  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.724187  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:53.724252  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:53.724349  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:53.752962  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:53.752990  596882 cri.go:89] found id: ""
	I1217 20:00:53.753000  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:53.753058  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.757461  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:53.757549  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:53.785748  596882 cri.go:89] found id: ""
	I1217 20:00:53.785774  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.785785  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:53.785792  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:53.785862  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:53.815860  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:53.815889  596882 cri.go:89] found id: ""
	I1217 20:00:53.815899  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:53.815952  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.820565  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:53.820632  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:53.847814  596882 cri.go:89] found id: ""
	I1217 20:00:53.847839  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.847850  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:53.847857  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:53.847920  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:53.876185  596882 cri.go:89] found id: ""
	I1217 20:00:53.876218  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.876230  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:53.876244  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:53.876259  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:53.971642  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:53.971693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:53.990638  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:53.990675  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:54.050668  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:54.050692  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:54.050707  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:54.084846  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:54.084893  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:54.115061  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:54.115108  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:54.146463  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:54.146491  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:54.199121  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:54.199159  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:56.736153  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:56.736638  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:56.736693  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:56.736746  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:56.765576  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:56.765600  596882 cri.go:89] found id: ""
	I1217 20:00:56.765610  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:56.765676  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.769942  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:56.770013  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:56.798112  596882 cri.go:89] found id: ""
	I1217 20:00:56.798145  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.798157  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:56.798165  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:56.798234  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:56.825167  596882 cri.go:89] found id: ""
	I1217 20:00:56.825200  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.825231  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:56.825247  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:56.825311  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:56.852568  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:56.852592  596882 cri.go:89] found id: ""
	I1217 20:00:56.852602  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:56.852661  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.856829  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:56.856902  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:56.883929  596882 cri.go:89] found id: ""
	I1217 20:00:56.883973  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.883986  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:56.883999  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:56.884062  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:56.911693  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:56.911714  596882 cri.go:89] found id: ""
	I1217 20:00:56.911722  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:56.911772  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.916212  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:56.916276  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:56.942585  596882 cri.go:89] found id: ""
	I1217 20:00:56.942617  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.942633  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:56.942642  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:56.942700  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:56.971939  596882 cri.go:89] found id: ""
	I1217 20:00:56.971976  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.971990  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:56.972004  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:56.972024  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:57.001777  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:57.001806  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:57.032936  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:57.032965  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:57.078327  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:57.078364  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:57.113176  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:57.113213  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:57.201920  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:57.201957  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:57.218426  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:57.218456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:00:54.174562  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:54.194566  631473 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 20:00:54.199116  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:00:54.210935  631473 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:00:54.211103  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:54.211184  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.248494  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.248518  631473 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:00:54.248568  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.273697  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.273726  631473 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:00:54.273735  631473 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1217 20:00:54.273832  631473 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
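	The kubelet unit and ExecStart shown above are installed as a systemd drop-in (the 10-kubeadm.conf scp'd a few lines below). A small sketch for checking that this override is what systemd actually runs inside the node, using standard systemctl subcommands:
	  # show the base unit plus minikube's drop-in carrying the ExecStart above
	  systemctl cat kubelet
	  # confirm the effective command line after daemon-reload
	  systemctl show kubelet -p ExecStart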
	I1217 20:00:54.273935  631473 ssh_runner.go:195] Run: crio config
	I1217 20:00:54.323646  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:54.323671  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:54.323691  631473 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:00:54.323723  631473 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759234 NodeName:default-k8s-diff-port-759234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:00:54.323843  631473 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:00:54.323910  631473 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:00:54.333287  631473 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:00:54.333359  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:00:54.341865  631473 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 20:00:54.355367  631473 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:00:54.370136  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
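	The 2224-byte kubeadm.yaml.new written above is the rendered config printed earlier in this log. A sketch of sanity-checking it before kubeadm init; kubeadm config validate exists in recent kubeadm releases, and the binary path is the same one used elsewhere in this log:
	  sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new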
	I1217 20:00:54.383695  631473 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:00:54.387416  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:00:54.397752  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:54.478375  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:00:54.502901  631473 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234 for IP: 192.168.94.2
	I1217 20:00:54.502928  631473 certs.go:195] generating shared ca certs ...
	I1217 20:00:54.502956  631473 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.503145  631473 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:00:54.503202  631473 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:00:54.503217  631473 certs.go:257] generating profile certs ...
	I1217 20:00:54.503295  631473 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key
	I1217 20:00:54.503322  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt with IP's: []
	I1217 20:00:54.617711  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt ...
	I1217 20:00:54.617747  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt: {Name:mk5d78d7f68addaf1f73847c6c02fd442f5e6ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.617930  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key ...
	I1217 20:00:54.617950  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key: {Name:mke8a415d0af374cf9fe8570e6fe4c7202332109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.618032  631473 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167
	I1217 20:00:54.618049  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 20:00:54.665685  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 ...
	I1217 20:00:54.665716  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167: {Name:mkfcccc5ab764237ebc01d7e772bd39ad2e57805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.665884  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 ...
	I1217 20:00:54.665904  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167: {Name:mk4c6de11c85c3fb77bd1f278ce0e0fd2b33aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.666008  631473 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt
	I1217 20:00:54.666104  631473 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key
	I1217 20:00:54.666162  631473 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key
	I1217 20:00:54.666178  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt with IP's: []
	I1217 20:00:54.735423  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt ...
	I1217 20:00:54.735452  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt: {Name:mk6946a87226d60c386ab3fc364ed99a58d10cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.735624  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key ...
	I1217 20:00:54.735638  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key: {Name:mk6cae84f91184f3a12c3274f32b7e32ae6eea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
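	The client and proxy-client certificates above are generated in Go (crypto.go), not with openssl. As a rough illustration of the signing relationship only, a hypothetical openssl equivalent for a client cert signed by the minikubeCA key referenced at the start of this section; the subject is an assumption, not taken from the log:
	  MK=/home/jenkins/minikube-integration/22186-372245/.minikube   # CA paths from the log
	  openssl genrsa -out client.key 2048
	  openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	  openssl x509 -req -in client.csr -CA "$MK/ca.crt" -CAkey "$MK/ca.key" -CAcreateserial -days 365 -out client.crt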
	I1217 20:00:54.735804  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:00:54.735844  631473 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:00:54.735855  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:00:54.735877  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:00:54.735901  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:00:54.735925  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:00:54.735974  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:54.736625  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:00:54.756198  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:00:54.773753  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:00:54.791250  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:00:54.809439  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:00:54.828101  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:00:54.847713  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:00:54.866560  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:00:54.885184  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:00:54.906455  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:00:54.924265  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:00:54.942817  631473 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:00:54.956309  631473 ssh_runner.go:195] Run: openssl version
	I1217 20:00:54.962641  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.971170  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:00:54.979233  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983177  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983245  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:00:55.018977  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.027253  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.035165  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.043017  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:00:55.051440  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055458  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055523  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.092379  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:00:55.101231  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:00:55.111064  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.119199  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:00:55.127063  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.130993  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.131062  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.165321  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:00:55.173294  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
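	The openssl/ln pairs above build the hashed symlinks (b5213941.0, 3ec20f2e.0, 51391683.0) that OpenSSL's CA lookup expects. The same idea for one certificate, compacted into two lines (file names taken from the log):
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"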
	I1217 20:00:55.181422  631473 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:00:55.185376  631473 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:00:55.185448  631473 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:55.185546  631473 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:00:55.185607  631473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:00:55.217477  631473 cri.go:89] found id: ""
	I1217 20:00:55.217551  631473 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:00:55.226933  631473 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:00:55.236854  631473 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:00:55.236934  631473 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:00:55.245579  631473 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:00:55.245602  631473 kubeadm.go:158] found existing configuration files:
	
	I1217 20:00:55.245652  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 20:00:55.253938  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:00:55.253998  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:00:55.261865  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 20:00:55.269887  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:00:55.269992  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:00:55.278000  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.286714  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:00:55.286788  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.296035  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 20:00:55.305037  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:00:55.305131  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:00:55.312998  631473 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:00:55.373971  631473 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:00:55.436480  631473 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1217 20:00:56.071929  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:58.571128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:56.104574  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:58.604838  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:57.277327  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:57.277349  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:57.277366  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:59.811179  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1217 20:01:01.071960  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:03.571727  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:00.604975  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:02.605263  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:05.106561  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:01:06.067126  631473 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:01:06.067196  631473 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:01:06.067312  631473 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:01:06.067401  631473 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:01:06.067442  631473 kubeadm.go:319] OS: Linux
	I1217 20:01:06.067513  631473 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:01:06.067558  631473 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:01:06.067635  631473 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:01:06.067697  631473 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:01:06.067738  631473 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:01:06.067813  631473 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:01:06.067880  631473 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:01:06.067957  631473 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:01:06.068050  631473 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:01:06.068197  631473 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:01:06.068340  631473 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:01:06.068462  631473 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:01:06.070305  631473 out.go:252]   - Generating certificates and keys ...
	I1217 20:01:06.070395  631473 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:01:06.070458  631473 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:01:06.070524  631473 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:01:06.070580  631473 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:01:06.070634  631473 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:01:06.070675  631473 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:01:06.070722  631473 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:01:06.070887  631473 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.070954  631473 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:01:06.071106  631473 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.071215  631473 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:01:06.071290  631473 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:01:06.071343  631473 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:01:06.071423  631473 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:01:06.071499  631473 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:01:06.071573  631473 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:01:06.071647  631473 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:01:06.071757  631473 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:01:06.071841  631473 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:01:06.071959  631473 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:01:06.072065  631473 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:01:06.073367  631473 out.go:252]   - Booting up control plane ...
	I1217 20:01:06.073455  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:01:06.073530  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:01:06.073591  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:01:06.073692  631473 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:01:06.073789  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:01:06.073886  631473 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:01:06.073960  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:01:06.074002  631473 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:01:06.074140  631473 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:01:06.074228  631473 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:01:06.074276  631473 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001922128s
	I1217 20:01:06.074352  631473 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:01:06.074416  631473 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1217 20:01:06.074487  631473 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:01:06.074549  631473 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:01:06.074624  631473 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.929603333s
	I1217 20:01:06.074691  631473 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.11807832s
	I1217 20:01:06.074783  631473 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002138646s
	I1217 20:01:06.074883  631473 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:01:06.074999  631473 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:01:06.075046  631473 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:01:06.075233  631473 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-759234 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:01:06.075296  631473 kubeadm.go:319] [bootstrap-token] Using token: v6m366.ufgpfn05m87tgdpr
	I1217 20:01:06.076758  631473 out.go:252]   - Configuring RBAC rules ...
	I1217 20:01:06.076848  631473 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:01:06.076928  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:01:06.077189  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:01:06.077365  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:01:06.077488  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:01:06.077579  631473 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:01:06.077727  631473 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:01:06.077797  631473 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:01:06.077864  631473 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:01:06.077879  631473 kubeadm.go:319] 
	I1217 20:01:06.077952  631473 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:01:06.077959  631473 kubeadm.go:319] 
	I1217 20:01:06.078019  631473 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:01:06.078028  631473 kubeadm.go:319] 
	I1217 20:01:06.078048  631473 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:01:06.078140  631473 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:01:06.078221  631473 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:01:06.078230  631473 kubeadm.go:319] 
	I1217 20:01:06.078313  631473 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:01:06.078322  631473 kubeadm.go:319] 
	I1217 20:01:06.078396  631473 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:01:06.078404  631473 kubeadm.go:319] 
	I1217 20:01:06.078487  631473 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:01:06.078589  631473 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:01:06.078685  631473 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:01:06.078694  631473 kubeadm.go:319] 
	I1217 20:01:06.078778  631473 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:01:06.078851  631473 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:01:06.078857  631473 kubeadm.go:319] 
	I1217 20:01:06.078933  631473 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079036  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:01:06.079057  631473 kubeadm.go:319] 	--control-plane 
	I1217 20:01:06.079060  631473 kubeadm.go:319] 
	I1217 20:01:06.079150  631473 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:01:06.079160  631473 kubeadm.go:319] 
	I1217 20:01:06.079259  631473 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079417  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
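	The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key. A standard way to recompute it on the control-plane node; the CA path under /var/lib/minikube/certs matches the scp steps earlier in this log (upstream docs use /etc/kubernetes/pki/ca.crt):
	  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	    | openssl pkey -pubin -outform der \
	    | openssl dgst -sha256 -hex
	  # the hex digest (after the "(stdin)= " prefix) should match the sha256:8ef867ec... value above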
	I1217 20:01:06.079446  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:01:06.079457  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:06.081231  631473 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:01:04.812163  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 20:01:04.812235  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:04.812292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:04.844291  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:04.844315  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:04.844319  596882 cri.go:89] found id: ""
	I1217 20:01:04.844328  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:04.844385  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.848366  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.852177  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:04.852256  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:04.883987  596882 cri.go:89] found id: ""
	I1217 20:01:04.884024  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.884038  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:04.884051  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:04.884140  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:04.914990  596882 cri.go:89] found id: ""
	I1217 20:01:04.915020  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.915031  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:04.915040  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:04.915135  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:04.944932  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:04.944965  596882 cri.go:89] found id: ""
	I1217 20:01:04.944978  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:04.945047  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.949407  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:04.949476  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:04.980714  596882 cri.go:89] found id: ""
	I1217 20:01:04.980744  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.980756  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:04.980765  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:04.980827  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:05.014278  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:05.014303  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:05.014306  596882 cri.go:89] found id: ""
	I1217 20:01:05.014315  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:05.014369  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.019212  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.023605  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:05.023688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:05.054178  596882 cri.go:89] found id: ""
	I1217 20:01:05.054210  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.054220  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:05.054226  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:05.054297  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:05.089365  596882 cri.go:89] found id: ""
	I1217 20:01:05.089398  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.089410  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:05.089432  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:05.089451  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:05.129946  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:05.129977  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:05.229093  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:05.229136  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:01:06.082676  631473 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:01:06.087568  631473 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:01:06.087588  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:01:06.101995  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:01:06.315905  631473 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-759234 minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=default-k8s-diff-port-759234 minikube.k8s.io/primary=true
	I1217 20:01:06.327829  631473 ops.go:34] apiserver oom_adj: -16
	I1217 20:01:06.396458  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.897042  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.396599  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.604674  624471 pod_ready.go:94] pod "coredns-7d764666f9-988jw" is "Ready"
	I1217 20:01:07.604701  624471 pod_ready.go:86] duration metric: took 37.00583192s for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.607174  624471 pod_ready.go:83] waiting for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.611282  624471 pod_ready.go:94] pod "etcd-no-preload-832842" is "Ready"
	I1217 20:01:07.611311  624471 pod_ready.go:86] duration metric: took 4.112039ms for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.613297  624471 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.617064  624471 pod_ready.go:94] pod "kube-apiserver-no-preload-832842" is "Ready"
	I1217 20:01:07.617117  624471 pod_ready.go:86] duration metric: took 3.797766ms for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.619212  624471 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.803328  624471 pod_ready.go:94] pod "kube-controller-manager-no-preload-832842" is "Ready"
	I1217 20:01:07.803357  624471 pod_ready.go:86] duration metric: took 184.117172ms for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.003550  624471 pod_ready.go:83] waiting for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.403261  624471 pod_ready.go:94] pod "kube-proxy-jc5dd" is "Ready"
	I1217 20:01:08.403288  624471 pod_ready.go:86] duration metric: took 399.709625ms for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.603502  624471 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002875  624471 pod_ready.go:94] pod "kube-scheduler-no-preload-832842" is "Ready"
	I1217 20:01:09.002905  624471 pod_ready.go:86] duration metric: took 399.378114ms for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002919  624471 pod_ready.go:40] duration metric: took 38.408153316s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:09.051128  624471 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:01:09.053534  624471 out.go:179] * Done! kubectl is now configured to use "no-preload-832842" cluster and "default" namespace by default
	W1217 20:01:06.072320  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:08.571546  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	I1217 20:01:07.897116  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.397124  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.897399  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.397296  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.897202  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.397310  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.897175  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.975504  631473 kubeadm.go:1114] duration metric: took 4.659591269s to wait for elevateKubeSystemPrivileges
	I1217 20:01:10.975540  631473 kubeadm.go:403] duration metric: took 15.790098497s to StartCluster
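The repeated "kubectl get sa default" runs above (and continuing below for the same profile) show minikube polling until the "default" ServiceAccount exists before it relies on the kube-system RBAC binding it just created. A minimal hand-rolled equivalent of that wait loop, assuming the binary and kubeconfig paths taken from the log and a 500ms retry interval inferred from the timestamps (illustrative only, not part of the test output):

    # Poll until the "default" ServiceAccount is visible to the apiserver.
    until sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry interval is an assumption based on the log timestamps
    done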
	I1217 20:01:10.975558  631473 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.975646  631473 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:10.977547  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.977796  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:01:10.977817  631473 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:10.977867  631473 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:01:10.978006  631473 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978029  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:10.978054  631473 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978101  631473 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-759234"
	I1217 20:01:10.978031  631473 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-759234"
	I1217 20:01:10.978248  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:10.978539  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.978747  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.979515  631473 out.go:179] * Verifying Kubernetes components...
	I1217 20:01:10.980948  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:11.004351  631473 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:01:09.570523  625400 pod_ready.go:94] pod "coredns-5dd5756b68-gbhs5" is "Ready"
	I1217 20:01:09.570551  625400 pod_ready.go:86] duration metric: took 34.005219617s for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.573051  625400 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.576701  625400 pod_ready.go:94] pod "etcd-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.576725  625400 pod_ready.go:86] duration metric: took 3.651465ms for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.579243  625400 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.583452  625400 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.583478  625400 pod_ready.go:86] duration metric: took 4.213779ms for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.585997  625400 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.768942  625400 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.768977  625400 pod_ready.go:86] duration metric: took 182.957254ms for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.970200  625400 pod_ready.go:83] waiting for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.368408  625400 pod_ready.go:94] pod "kube-proxy-bdzb6" is "Ready"
	I1217 20:01:10.368435  625400 pod_ready.go:86] duration metric: took 398.20631ms for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.569794  625400 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969210  625400 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-894575" is "Ready"
	I1217 20:01:10.969252  625400 pod_ready.go:86] duration metric: took 399.426249ms for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969270  625400 pod_ready.go:40] duration metric: took 35.409804659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:11.041190  625400 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1217 20:01:11.044208  625400 out.go:203] 
	W1217 20:01:11.045630  625400 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 20:01:11.047652  625400 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 20:01:11.049163  625400 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-894575" cluster and "default" namespace by default
	I1217 20:01:11.005141  631473 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-759234"
	I1217 20:01:11.005190  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:11.005673  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:11.005685  631473 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.005702  631473 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:01:11.005753  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.034589  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.037037  631473 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.037065  631473 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:01:11.037212  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.065091  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.078156  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:01:11.158438  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:11.173742  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.214719  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.376291  631473 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 20:01:11.376906  631473 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-759234" to be "Ready" ...
	I1217 20:01:11.616252  631473 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:01:11.617452  631473 addons.go:530] duration metric: took 639.583404ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:01:11.880698  631473 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-759234" context rescaled to 1 replicas
	I1217 20:01:15.295985  596882 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066827019s)
	W1217 20:01:15.296022  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 20:01:15.296032  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:15.296044  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:15.329910  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:15.329943  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:15.361430  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:15.361465  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:15.379135  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:15.379176  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:15.413631  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:15.413671  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:15.444072  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:15.444120  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:15.474296  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:15.474325  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:13.379733  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:15.380677  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:17.382167  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	I1217 20:01:18.028829  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:19.268145  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:48746->192.168.76.2:8443: read: connection reset by peer
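The healthz check above failed with a connection reset. A minimal manual probe of the same endpoint, assuming the URL shown in the log and that anonymous access to /healthz is allowed on this apiserver; the timeout value is an assumption, not taken from this report:

    # Probe the apiserver health endpoint directly (insecure TLS, short timeout).
    curl -ks --max-time 15 https://192.168.76.2:8443/healthz; echo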
	I1217 20:01:19.268222  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:19.268292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:19.297951  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.297972  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:19.297976  596882 cri.go:89] found id: ""
	I1217 20:01:19.297984  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:19.298048  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.302214  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.305947  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:19.306014  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:19.333763  596882 cri.go:89] found id: ""
	I1217 20:01:19.333789  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.333798  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:19.333804  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:19.333864  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:19.362644  596882 cri.go:89] found id: ""
	I1217 20:01:19.362672  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.362682  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:19.362687  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:19.362752  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:19.394030  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.394059  596882 cri.go:89] found id: ""
	I1217 20:01:19.394071  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:19.394157  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.398506  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:19.398583  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:19.425535  596882 cri.go:89] found id: ""
	I1217 20:01:19.425560  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.425569  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:19.425575  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:19.425638  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:19.454704  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.454726  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.454731  596882 cri.go:89] found id: ""
	I1217 20:01:19.454743  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:19.454811  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.459054  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.463029  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:19.463111  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:19.491583  596882 cri.go:89] found id: ""
	I1217 20:01:19.491610  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.491622  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:19.491631  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:19.491688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:19.520292  596882 cri.go:89] found id: ""
	I1217 20:01:19.520328  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.520341  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:19.520364  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:19.520390  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:19.604632  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:19.604674  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:19.621452  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:19.621486  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:19.680554  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:19.680581  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:19.680597  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.712658  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:19.712693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.740964  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:19.740997  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:19.773014  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:19.773045  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	W1217 20:01:19.802765  596882 logs.go:130] failed kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	
	** /stderr **
	I1217 20:01:19.802797  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:19.802814  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.830245  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:19.830272  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.857816  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:19.857846  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:19.879976  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:21.880734  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
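The retries above are polling the node's "Ready" condition for the default-k8s-diff-port-759234 profile. A minimal equivalent check one could run by hand, assuming a kubectl context named after the minikube profile (the context name is not shown in this report):

    # Print the node's Ready condition status ("True" once the node is Ready).
    kubectl --context default-k8s-diff-port-759234 get node default-k8s-diff-port-759234 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'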
	
	
	==> CRI-O <==
	Dec 17 20:00:49 no-preload-832842 crio[568]: time="2025-12-17T20:00:49.041013202Z" level=info msg="Started container" PID=1764 containerID=55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper id=b55325a1-724e-40af-994c-8eb65574bf9b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec15d0feb093f825afaffc6a197d0ff3ecd9a66fddff8fb31f9437971f51b5ea
	Dec 17 20:00:49 no-preload-832842 crio[568]: time="2025-12-17T20:00:49.091907515Z" level=info msg="Removing container: 6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5" id=448415fe-e3cf-4d07-894f-b257b64ed1b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:00:49 no-preload-832842 crio[568]: time="2025-12-17T20:00:49.102848057Z" level=info msg="Removed container 6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=448415fe-e3cf-4d07-894f-b257b64ed1b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.125645695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58a5e366-1444-4fe3-8c5f-79eb7b5c47d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.126676272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a0d64ec-c0e8-4fd8-869f-f7980df347ec name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.128293717Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=874de56e-6ee5-4be0-96b1-9f47e5f9b362 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.128443277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.133444688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.133777555Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3b2c430c59f49687533e237ba1b1610fac136f1fc84542d3996591ad1cc891bb/merged/etc/passwd: no such file or directory"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.133884Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3b2c430c59f49687533e237ba1b1610fac136f1fc84542d3996591ad1cc891bb/merged/etc/group: no such file or directory"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.134277142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.167114092Z" level=info msg="Created container d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137: kube-system/storage-provisioner/storage-provisioner" id=874de56e-6ee5-4be0-96b1-9f47e5f9b362 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.16791213Z" level=info msg="Starting container: d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137" id=76fb00f7-430c-4fe6-a67b-c3e047bff16b name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.170225357Z" level=info msg="Started container" PID=1778 containerID=d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137 description=kube-system/storage-provisioner/storage-provisioner id=76fb00f7-430c-4fe6-a67b-c3e047bff16b name=/runtime.v1.RuntimeService/StartContainer sandboxID=37a4519d64a6155074c56e3e7538f11d4ebe789e5a292f8d39c5395c31e6ac10
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.991979195Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=df20da0d-cefd-4225-a9c3-ea3946f6f4fc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.99299555Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dce755d2-6966-47d2-807f-ff2c0a705b6d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.994192288Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=99c6f88b-2e9f-4001-acee-b5b8ecf09875 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.994362777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.99983872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.000367243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.031327228Z" level=info msg="Created container c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=99c6f88b-2e9f-4001-acee-b5b8ecf09875 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.032053696Z" level=info msg="Starting container: c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337" id=9eb3ed70-aa91-451e-b497-4502bf6db091 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.034285823Z" level=info msg="Started container" PID=1811 containerID=c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper id=9eb3ed70-aa91-451e-b497-4502bf6db091 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec15d0feb093f825afaffc6a197d0ff3ecd9a66fddff8fb31f9437971f51b5ea
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.152215775Z" level=info msg="Removing container: 55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce" id=86d70372-04f7-4453-92e6-ddeeaee7c600 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.161528432Z" level=info msg="Removed container 55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=86d70372-04f7-4453-92e6-ddeeaee7c600 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c35ae1f5685d7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   3                   ec15d0feb093f       dashboard-metrics-scraper-867fb5f87b-zjc4j   kubernetes-dashboard
	d71ed695baa76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   37a4519d64a61       storage-provisioner                          kube-system
	55c1a97eef28c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   131175fcfc0fa       kubernetes-dashboard-b84665fb8-cfd69         kubernetes-dashboard
	df79f3414f094       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           54 seconds ago      Running             coredns                     0                   b4799098ac67e       coredns-7d764666f9-988jw                     kube-system
	28ed811767308       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   c45848f548378       busybox                                      default
	74a2be0dba394       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           54 seconds ago      Running             kube-proxy                  0                   71ca062f3ed9e       kube-proxy-jc5dd                             kube-system
	574a5ed645344       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   37a4519d64a61       storage-provisioner                          kube-system
	6dc1bf580a5e5       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   9eaa9c854caa7       kindnet-t5x5v                                kube-system
	aa0f70514b3b3       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           56 seconds ago      Running             kube-controller-manager     0                   670885867d7cc       kube-controller-manager-no-preload-832842    kube-system
	3c8014a76c7ed       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           56 seconds ago      Running             etcd                        0                   adea169e1e2a8       etcd-no-preload-832842                       kube-system
	93adc4b861b7c       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           56 seconds ago      Running             kube-scheduler              0                   1fc9e8de07a77       kube-scheduler-no-preload-832842             kube-system
	fc98dcbd3e923       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           56 seconds ago      Running             kube-apiserver              0                   5debfc43044b1       kube-apiserver-no-preload-832842             kube-system
	
	
	==> coredns [df79f3414f09421efcb91bbc4abcc73e07bf62fc320f79ed6c541180aa4945ab] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:32939 - 62348 "HINFO IN 6765570193243579541.1179140327450952908. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.42761068s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-832842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-832842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=no-preload-832842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_59_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-832842
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-832842
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e81b3478-a278-4914-8840-ea9b4f5123a7
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-7d764666f9-988jw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-no-preload-832842                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-t5x5v                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-832842              250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-no-preload-832842     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-jc5dd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-832842              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zjc4j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cfd69          0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  108s  node-controller  Node no-preload-832842 event: Registered Node no-preload-832842 in Controller
	  Normal  RegisteredNode  52s   node-controller  Node no-preload-832842 event: Registered Node no-preload-832842 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [3c8014a76c7ede91c3cd5009249d11a432295b5b5abd84d90df0cea58173d3dd] <==
	{"level":"info","ts":"2025-12-17T20:00:27.555006Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T20:00:27.555311Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T20:00:27.555173Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T20:00:27.555215Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T20:00:27.555524Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-17T20:00:27.555668Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T20:00:27.555870Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T20:00:27.944308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T20:00:27.944356Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T20:00:27.944429Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T20:00:27.944442Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:00:27.944457Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.945223Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.945252Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:00:27.945283Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.945294Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.947737Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-832842 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T20:00:27.947779Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:00:27.947851Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:00:27.948203Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T20:00:27.948330Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T20:00:27.949267Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:00:27.949317Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:00:27.954124Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-17T20:00:27.954305Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:01:24 up  1:43,  0 user,  load average: 3.61, 3.26, 2.34
	Linux no-preload-832842 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6dc1bf580a5e5d88fdf2f6bbe5d1905fb56db30030d094660f124897fd457658] <==
	I1217 20:00:30.539410       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:00:30.594507       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 20:00:30.594693       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:00:30.594721       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:00:30.594750       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:00:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:00:30.794979       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:00:30.795168       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:00:30.795187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:00:30.937403       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:00:31.195591       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:00:31.195625       1 metrics.go:72] Registering metrics
	I1217 20:00:31.195692       1 controller.go:711] "Syncing nftables rules"
	I1217 20:00:40.795164       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:00:40.795228       1 main.go:301] handling current node
	I1217 20:00:50.795193       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:00:50.795240       1 main.go:301] handling current node
	I1217 20:01:00.795186       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:01:00.795220       1 main.go:301] handling current node
	I1217 20:01:10.795302       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:01:10.795365       1 main.go:301] handling current node
	I1217 20:01:20.797912       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:01:20.797952       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc98dcbd3e923feb9befb5e08f3923050cddcdcd6ec0dde8a4a828548f21afbc] <==
	I1217 20:00:29.028155       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:00:29.028251       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:00:29.028255       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:00:29.028238       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 20:00:29.031354       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:29.031412       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:29.031447       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:00:29.031617       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:00:29.033942       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 20:00:29.040686       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:00:29.076871       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:00:29.082167       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:29.082198       1 policy_source.go:248] refreshing policies
	I1217 20:00:29.097621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:00:29.315764       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:00:29.344221       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:00:29.367629       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:00:29.375216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:00:29.382717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:00:29.421615       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.64.7"}
	I1217 20:00:29.435712       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.111.16"}
	I1217 20:00:29.931340       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 20:00:32.626039       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:00:32.677494       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:00:32.826471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [aa0f70514b3b3987679fa08562d6a29d0cde6f41668ff6920603c0af90405bbe] <==
	I1217 20:00:32.192095       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192072       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 20:00:32.192195       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192619       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192721       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192909       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.193363       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.194346       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.198859       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.202967       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.203030       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.203057       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.206489       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.211378       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.211401       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.211469       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.212607       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.212627       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.212654       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.215772       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.227342       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.289938       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.294485       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.294529       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 20:00:32.294537       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [74a2be0dba394331147af1f7139cc8715764693116a735ed916bd4c8ee2fd3bf] <==
	I1217 20:00:30.394049       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:00:30.471197       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:00:30.571711       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:30.571753       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 20:00:30.571964       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:00:30.593382       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:00:30.593446       1 server_linux.go:136] "Using iptables Proxier"
	I1217 20:00:30.599896       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:00:30.600485       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 20:00:30.600564       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:30.602134       1 config.go:200] "Starting service config controller"
	I1217 20:00:30.602164       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:00:30.602217       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:00:30.602238       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:00:30.602282       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:00:30.602315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:00:30.602325       1 config.go:309] "Starting node config controller"
	I1217 20:00:30.602362       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:00:30.602372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:00:30.702278       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:00:30.702341       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:00:30.702449       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [93adc4b861b7c2cb084b258ba073a7308743dab281018c38f60ca99fa8a8c8eb] <==
	I1217 20:00:27.674048       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:00:28.973499       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:00:28.973645       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:00:28.973668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:00:28.973696       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:00:29.017329       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 20:00:29.017377       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:29.020516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:00:29.020564       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:00:29.020696       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:00:29.020724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:00:29.122491       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 20:00:44 no-preload-832842 kubelet[723]: E1217 20:00:44.074404     723 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-832842" containerName="kube-apiserver"
	Dec 17 20:00:48 no-preload-832842 kubelet[723]: E1217 20:00:48.991175     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:00:48 no-preload-832842 kubelet[723]: I1217 20:00:48.991230     723 scope.go:122] "RemoveContainer" containerID="6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: I1217 20:00:49.089951     723 scope.go:122] "RemoveContainer" containerID="6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: E1217 20:00:49.090263     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: I1217 20:00:49.090304     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: E1217 20:00:49.090486     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:00:51 no-preload-832842 kubelet[723]: E1217 20:00:51.830902     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:00:51 no-preload-832842 kubelet[723]: I1217 20:00:51.830946     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:00:51 no-preload-832842 kubelet[723]: E1217 20:00:51.831146     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:01:01 no-preload-832842 kubelet[723]: I1217 20:01:01.125188     723 scope.go:122] "RemoveContainer" containerID="574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144"
	Dec 17 20:01:07 no-preload-832842 kubelet[723]: E1217 20:01:07.538402     723 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-988jw" containerName="coredns"
	Dec 17 20:01:09 no-preload-832842 kubelet[723]: E1217 20:01:09.991490     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:01:09 no-preload-832842 kubelet[723]: I1217 20:01:09.991525     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: I1217 20:01:10.150909     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: E1217 20:01:10.151207     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: I1217 20:01:10.151240     723 scope.go:122] "RemoveContainer" containerID="c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: E1217 20:01:10.151431     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:01:11 no-preload-832842 kubelet[723]: E1217 20:01:11.831828     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:01:11 no-preload-832842 kubelet[723]: I1217 20:01:11.831876     723 scope.go:122] "RemoveContainer" containerID="c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	Dec 17 20:01:11 no-preload-832842 kubelet[723]: E1217 20:01:11.832123     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:01:21 no-preload-832842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:01:21 no-preload-832842 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:01:21 no-preload-832842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:01:21 no-preload-832842 systemd[1]: kubelet.service: Consumed 1.816s CPU time.
	
	
	==> kubernetes-dashboard [55c1a97eef28cd0406e0d4aef3df5a460e2bc3114b4471c21d47e187a026216d] <==
	2025/12/17 20:00:40 Using namespace: kubernetes-dashboard
	2025/12/17 20:00:40 Using in-cluster config to connect to apiserver
	2025/12/17 20:00:40 Using secret token for csrf signing
	2025/12/17 20:00:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:00:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:00:40 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/17 20:00:40 Generating JWE encryption key
	2025/12/17 20:00:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:00:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:00:40 Initializing JWE encryption key from synchronized object
	2025/12/17 20:00:40 Creating in-cluster Sidecar client
	2025/12/17 20:00:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:00:40 Serving insecurely on HTTP port: 9090
	2025/12/17 20:01:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:00:40 Starting overwatch
	
	
	==> storage-provisioner [574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144] <==
	I1217 20:00:30.354687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:01:00.359543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137] <==
	I1217 20:01:01.184158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:01:01.192401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:01:01.192453       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:01:01.195518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:04.650919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:08.912149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:12.511335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:15.565355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:18.587726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:18.592870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:01:18.593070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:01:18.593175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc57e620-3a27-4c8c-a77e-e1c5cd6ef8f6", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-832842_850ea18b-545c-49a8-9739-189a6fa3e3bd became leader
	I1217 20:01:18.593312       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-832842_850ea18b-545c-49a8-9739-189a6fa3e3bd!
	W1217 20:01:18.595843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:18.599149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:01:18.693862       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-832842_850ea18b-545c-49a8-9739-189a6fa3e3bd!
	W1217 20:01:20.602886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:20.607496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:22.611293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:22.615407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:24.618358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:24.622877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-832842 -n no-preload-832842
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-832842 -n no-preload-832842: exit status 2 (353.621505ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-832842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-832842
helpers_test.go:244: (dbg) docker inspect no-preload-832842:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4",
	        "Created": "2025-12-17T19:59:10.833809324Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 624671,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:00:20.645613398Z",
	            "FinishedAt": "2025-12-17T20:00:19.733406734Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/hostname",
	        "HostsPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/hosts",
	        "LogPath": "/var/lib/docker/containers/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4/dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4-json.log",
	        "Name": "/no-preload-832842",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-832842:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-832842",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "dc205de21d84136a9158f48e22680e3a6dbeb7058d8f7cb8a1ec42b2ab7078c4",
	                "LowerDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ebb0d0f911a75643e43d20c434d6ce8701dfed1b02452ca7b47f96286ae91c9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-832842",
	                "Source": "/var/lib/docker/volumes/no-preload-832842/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-832842",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-832842",
	                "name.minikube.sigs.k8s.io": "no-preload-832842",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "56b9b7028a6e31debffefaa714d520e79fd4d737efec11c3d53f4106876c3114",
	            "SandboxKey": "/var/run/docker/netns/56b9b7028a6e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-832842": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a19db78cafed3da0943e15828af72c0aafbad853d47090363f5479ad475afe12",
	                    "EndpointID": "0d958ef65dfadfc20a86edb80bd98067acf45048f3df96dbf481fce6467d7328",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "3a:9d:68:e2:a8:87",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-832842",
	                        "dc205de21d84"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842: exit status 2 (364.35126ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-832842 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-832842 logs -n 25: (1.268584626s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p NoKubernetes-327438 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ cert-options-997440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p cert-options-997440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p cert-options-997440                                                                                                                                                                                                                        │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p NoKubernetes-327438 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                               │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p old-k8s-version-894575 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                    │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                               │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:00:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:00:42.430475  631473 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:00:42.430717  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430725  631473 out.go:374] Setting ErrFile to fd 2...
	I1217 20:00:42.430734  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430932  631473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:00:42.431484  631473 out.go:368] Setting JSON to false
	I1217 20:00:42.432651  631473 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6193,"bootTime":1765995449,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:00:42.432716  631473 start.go:143] virtualization: kvm guest
	I1217 20:00:42.434554  631473 out.go:179] * [default-k8s-diff-port-759234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:00:42.436272  631473 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:00:42.436339  631473 notify.go:221] Checking for updates...
	I1217 20:00:42.438673  631473 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:00:42.439791  631473 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:00:42.444253  631473 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:00:42.445569  631473 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:00:42.446765  631473 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:00:42.448395  631473 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448504  631473 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448574  631473 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 20:00:42.448676  631473 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:00:42.473152  631473 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:00:42.473303  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.530715  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.520326347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.530839  631473 docker.go:319] overlay module found
	I1217 20:00:42.533607  631473 out.go:179] * Using the docker driver based on user configuration
	I1217 20:00:42.534900  631473 start.go:309] selected driver: docker
	I1217 20:00:42.534931  631473 start.go:927] validating driver "docker" against <nil>
	I1217 20:00:42.534945  631473 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:00:42.535594  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.593983  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.584279589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.594185  631473 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:00:42.594402  631473 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:00:42.596050  631473 out.go:179] * Using Docker driver with root privileges
	I1217 20:00:42.597217  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:42.597290  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:42.597303  631473 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:00:42.597383  631473 start.go:353] cluster config:
	{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:42.599022  631473 out.go:179] * Starting "default-k8s-diff-port-759234" primary control-plane node in "default-k8s-diff-port-759234" cluster
	I1217 20:00:42.600540  631473 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:00:42.601819  631473 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:00:42.603027  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:42.603089  631473 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:00:42.603104  631473 cache.go:65] Caching tarball of preloaded images
	I1217 20:00:42.603158  631473 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:00:42.603241  631473 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:00:42.603255  631473 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:00:42.603409  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:42.603441  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json: {Name:mka62982d045e5cb058ac77025f345457b6a6373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:42.624544  631473 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:00:42.624564  631473 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:00:42.624587  631473 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:00:42.624618  631473 start.go:360] acquireMachinesLock for default-k8s-diff-port-759234: {Name:mk173016aaa355dafae1bd5727aae1037817b426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:00:42.624714  631473 start.go:364] duration metric: took 77.83µs to acquireMachinesLock for "default-k8s-diff-port-759234"
	I1217 20:00:42.624738  631473 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:00:42.624812  631473 start.go:125] createHost starting for "" (driver="docker")
	W1217 20:00:39.572913  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.072117  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:44.072432  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.104752  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:44.105460  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:44.011034  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:44.011594  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:44.011658  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:44.011708  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:44.044351  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.044381  596882 cri.go:89] found id: ""
	I1217 20:00:44.044394  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:44.044463  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.049338  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:44.049428  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:44.080283  596882 cri.go:89] found id: ""
	I1217 20:00:44.080314  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.080326  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:44.080337  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:44.080404  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:44.113789  596882 cri.go:89] found id: ""
	I1217 20:00:44.113818  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.113829  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:44.113835  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:44.113889  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:44.146485  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.146516  596882 cri.go:89] found id: ""
	I1217 20:00:44.146529  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:44.146598  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.150860  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:44.150933  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:44.180612  596882 cri.go:89] found id: ""
	I1217 20:00:44.180648  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.180661  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:44.180669  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:44.180733  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:44.215315  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.215341  596882 cri.go:89] found id: ""
	I1217 20:00:44.215351  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:44.215410  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.219707  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:44.219792  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:44.250358  596882 cri.go:89] found id: ""
	I1217 20:00:44.250390  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.250402  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:44.250410  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:44.250480  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:44.279599  596882 cri.go:89] found id: ""
	I1217 20:00:44.279629  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.279639  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:44.279654  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:44.279673  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:44.366299  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:44.366333  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:44.383253  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:44.383288  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:44.442881  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:44.442906  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:44.442929  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.483060  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:44.483124  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.514331  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:44.514367  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.542722  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:44.542760  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:44.590351  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:44.590389  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.127294  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:47.127787  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:47.127853  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:47.127918  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:47.156370  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.156396  596882 cri.go:89] found id: ""
	I1217 20:00:47.156404  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:47.156460  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.160516  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:47.160594  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:47.195038  596882 cri.go:89] found id: ""
	I1217 20:00:47.195068  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.195137  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:47.195143  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:47.195196  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:47.226808  596882 cri.go:89] found id: ""
	I1217 20:00:47.226835  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.226845  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:47.226851  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:47.226903  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:42.626516  631473 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:00:42.626787  631473 start.go:159] libmachine.API.Create for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:42.626819  631473 client.go:173] LocalClient.Create starting
	I1217 20:00:42.626888  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:00:42.626923  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.626942  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.626999  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:00:42.627020  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.627031  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.627386  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:00:42.645356  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:00:42.645431  631473 network_create.go:284] running [docker network inspect default-k8s-diff-port-759234] to gather additional debugging logs...
	I1217 20:00:42.645452  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234
	W1217 20:00:42.662433  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 returned with exit code 1
	I1217 20:00:42.662463  631473 network_create.go:287] error running [docker network inspect default-k8s-diff-port-759234]: docker network inspect default-k8s-diff-port-759234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-759234 not found
	I1217 20:00:42.662486  631473 network_create.go:289] output of [docker network inspect default-k8s-diff-port-759234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-759234 not found
	
	** /stderr **
	I1217 20:00:42.662577  631473 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:42.680765  631473 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:00:42.681557  631473 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:00:42.682052  631473 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:00:42.682584  631473 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 20:00:42.683304  631473 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 20:00:42.684136  631473 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4b420}
	I1217 20:00:42.684173  631473 network_create.go:124] attempt to create docker network default-k8s-diff-port-759234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 20:00:42.684252  631473 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 default-k8s-diff-port-759234
	I1217 20:00:42.733976  631473 network_create.go:108] docker network default-k8s-diff-port-759234 192.168.94.0/24 created
	I1217 20:00:42.734006  631473 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-759234" container
	I1217 20:00:42.734062  631473 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:00:42.752583  631473 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-759234 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:00:42.773596  631473 oci.go:103] Successfully created a docker volume default-k8s-diff-port-759234
	I1217 20:00:42.773686  631473 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-759234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --entrypoint /usr/bin/test -v default-k8s-diff-port-759234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:00:43.205798  631473 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-759234
	I1217 20:00:43.205868  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:43.205880  631473 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:00:43.205970  631473 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:00:47.198577  631473 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.992562765s)
	I1217 20:00:47.198609  631473 kic.go:203] duration metric: took 3.992725296s to extract preloaded images to volume ...
	W1217 20:00:47.198694  631473 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:00:47.198723  631473 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:00:47.198767  631473 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:00:47.260923  631473 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-759234 --name default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --network default-k8s-diff-port-759234 --ip 192.168.94.2 --volume default-k8s-diff-port-759234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	W1217 20:00:46.572829  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:49.072264  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:46.605455  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:49.104308  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:47.261698  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.261722  596882 cri.go:89] found id: ""
	I1217 20:00:47.261733  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:47.261790  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.267357  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:47.267438  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:47.306726  596882 cri.go:89] found id: ""
	I1217 20:00:47.306759  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.306770  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:47.306778  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:47.306842  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:47.340875  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.340912  596882 cri.go:89] found id: ""
	I1217 20:00:47.340924  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:47.341135  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.345736  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:47.345806  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:47.376962  596882 cri.go:89] found id: ""
	I1217 20:00:47.377012  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.377025  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:47.377032  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:47.377124  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:47.407325  596882 cri.go:89] found id: ""
	I1217 20:00:47.407359  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.407374  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:47.407387  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:47.407408  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:47.473703  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:47.473725  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:47.473743  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.508764  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:47.508811  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.539065  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:47.539113  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.571543  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:47.571587  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:47.643416  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:47.643456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.689273  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:47.689316  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:47.823222  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:47.823260  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.347237  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:50.347659  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:50.347717  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:50.348197  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:50.391187  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.391339  596882 cri.go:89] found id: ""
	I1217 20:00:50.391419  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:50.391505  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.396902  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:50.397015  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:50.441286  596882 cri.go:89] found id: ""
	I1217 20:00:50.441360  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.441373  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:50.441389  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:50.441452  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:50.479045  596882 cri.go:89] found id: ""
	I1217 20:00:50.479088  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.479100  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:50.479108  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:50.479174  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:50.515926  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.516275  596882 cri.go:89] found id: ""
	I1217 20:00:50.516295  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:50.516365  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.522153  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:50.522238  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:50.562124  596882 cri.go:89] found id: ""
	I1217 20:00:50.562187  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.562199  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:50.562208  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:50.562277  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:50.601222  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:50.601377  596882 cri.go:89] found id: ""
	I1217 20:00:50.601396  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:50.601522  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.607093  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:50.607179  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:50.643677  596882 cri.go:89] found id: ""
	I1217 20:00:50.643709  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.643725  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:50.643734  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:50.643810  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:50.683346  596882 cri.go:89] found id: ""
	I1217 20:00:50.683378  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.683389  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:50.683402  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:50.683418  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:50.807284  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:50.807323  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.829965  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:50.830005  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:50.903560  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:50.903583  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:50.903608  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.952336  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:50.952375  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.986508  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:50.986545  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:51.022486  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:51.022517  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:51.088659  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:51.088715  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.583096  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Running}}
	I1217 20:00:47.608914  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.634283  631473 cli_runner.go:164] Run: docker exec default-k8s-diff-port-759234 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:00:47.694519  631473 oci.go:144] the created container "default-k8s-diff-port-759234" has a running status.
	I1217 20:00:47.694556  631473 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa...
	I1217 20:00:47.741322  631473 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:00:47.777682  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.801570  631473 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:00:47.801595  631473 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-759234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:00:47.858176  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.886441  631473 machine.go:94] provisionDockerMachine start ...
	I1217 20:00:47.886562  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:47.913250  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:47.913628  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:47.913655  631473 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:00:47.914572  631473 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49044->127.0.0.1:33453: read: connection reset by peer
	I1217 20:00:51.082474  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.082503  631473 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-759234"
	I1217 20:00:51.082569  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.109173  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.109464  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.109487  631473 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-759234 && echo "default-k8s-diff-port-759234" | sudo tee /etc/hostname
	I1217 20:00:51.282514  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.282597  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.302139  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.302370  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.302388  631473 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-759234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-759234/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-759234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:00:51.456372  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:00:51.456426  631473 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:00:51.456479  631473 ubuntu.go:190] setting up certificates
	I1217 20:00:51.456491  631473 provision.go:84] configureAuth start
	I1217 20:00:51.456563  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:51.480508  631473 provision.go:143] copyHostCerts
	I1217 20:00:51.480576  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:00:51.480592  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:00:51.480669  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:00:51.480772  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:00:51.480783  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:00:51.480822  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:00:51.480896  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:00:51.480906  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:00:51.480938  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:00:51.481006  631473 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-759234 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-759234 localhost minikube]
	I1217 20:00:51.633655  631473 provision.go:177] copyRemoteCerts
	I1217 20:00:51.633763  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:00:51.633814  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.658060  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:51.774263  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:00:51.836683  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 20:00:51.862224  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:00:51.890608  631473 provision.go:87] duration metric: took 434.096039ms to configureAuth
	I1217 20:00:51.890644  631473 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:00:51.890863  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:00:51.891022  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.916236  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.916552  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.916578  631473 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:00:52.350209  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:00:52.350238  631473 machine.go:97] duration metric: took 4.46376868s to provisionDockerMachine
	I1217 20:00:52.350253  631473 client.go:176] duration metric: took 9.723424305s to LocalClient.Create
	I1217 20:00:52.350277  631473 start.go:167] duration metric: took 9.72348972s to libmachine.API.Create "default-k8s-diff-port-759234"
	I1217 20:00:52.350294  631473 start.go:293] postStartSetup for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:52.350305  631473 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:00:52.350383  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:00:52.350429  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.369228  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.477868  631473 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:00:52.482314  631473 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:00:52.482357  631473 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:00:52.482372  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:00:52.482454  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:00:52.482534  631473 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:00:52.482625  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:00:52.491557  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:52.515015  631473 start.go:296] duration metric: took 164.702667ms for postStartSetup
	I1217 20:00:52.515418  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.535477  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:52.535813  631473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:00:52.535873  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.555517  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.657422  631473 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:00:52.662205  631473 start.go:128] duration metric: took 10.037371351s to createHost
	I1217 20:00:52.662241  631473 start.go:83] releasing machines lock for "default-k8s-diff-port-759234", held for 10.037515093s
	I1217 20:00:52.662322  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.680193  631473 ssh_runner.go:195] Run: cat /version.json
	I1217 20:00:52.680276  631473 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:00:52.680310  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.680347  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.701061  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.701301  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.851661  631473 ssh_runner.go:195] Run: systemctl --version
	I1217 20:00:52.858481  631473 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:00:52.893608  631473 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:00:52.898824  631473 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:00:52.898902  631473 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:00:52.924893  631473 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:00:52.924917  631473 start.go:496] detecting cgroup driver to use...
	I1217 20:00:52.924946  631473 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:00:52.924995  631473 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:00:52.941996  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:00:52.954497  631473 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:00:52.954559  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:00:52.971423  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:00:52.990488  631473 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:00:53.079469  631473 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:00:53.166815  631473 docker.go:234] disabling docker service ...
	I1217 20:00:53.166878  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:00:53.186920  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:00:53.200855  631473 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:00:53.290366  631473 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:00:53.387334  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:00:53.400172  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:00:53.415056  631473 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:00:53.415136  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.425540  631473 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:00:53.425617  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.435225  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.444865  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.455024  631473 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:00:53.464046  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.473632  631473 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.488327  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.498230  631473 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:00:53.506887  631473 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:00:53.516474  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:53.601252  631473 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:00:54.068135  631473 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:00:54.068217  631473 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:00:54.073472  631473 start.go:564] Will wait 60s for crictl version
	I1217 20:00:54.073554  631473 ssh_runner.go:195] Run: which crictl
	I1217 20:00:54.078383  631473 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:00:54.106787  631473 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:00:54.106878  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.140042  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.172909  631473 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1217 20:00:51.073128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:53.572242  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:51.105457  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:53.606663  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:53.632189  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:53.632791  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:53.632867  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:53.632941  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:53.662308  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:53.662339  596882 cri.go:89] found id: ""
	I1217 20:00:53.662350  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:53.662420  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.666413  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:53.666495  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:53.695377  596882 cri.go:89] found id: ""
	I1217 20:00:53.695409  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.695421  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:53.695429  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:53.695516  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:53.724146  596882 cri.go:89] found id: ""
	I1217 20:00:53.724177  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.724187  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:53.724252  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:53.724349  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:53.752962  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:53.752990  596882 cri.go:89] found id: ""
	I1217 20:00:53.753000  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:53.753058  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.757461  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:53.757549  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:53.785748  596882 cri.go:89] found id: ""
	I1217 20:00:53.785774  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.785785  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:53.785792  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:53.785862  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:53.815860  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:53.815889  596882 cri.go:89] found id: ""
	I1217 20:00:53.815899  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:53.815952  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.820565  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:53.820632  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:53.847814  596882 cri.go:89] found id: ""
	I1217 20:00:53.847839  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.847850  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:53.847857  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:53.847920  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:53.876185  596882 cri.go:89] found id: ""
	I1217 20:00:53.876218  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.876230  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:53.876244  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:53.876259  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:53.971642  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:53.971693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:53.990638  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:53.990675  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:54.050668  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:54.050692  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:54.050707  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:54.084846  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:54.084893  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:54.115061  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:54.115108  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:54.146463  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:54.146491  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:54.199121  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:54.199159  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
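While this profile's apiserver keeps refusing connections, the minikube process logging as 596882 repeats a fixed gathering cycle: list containers for each control-plane component with crictl, tail the last 400 lines of each container log, and pull kubelet and CRI-O journal output. The sketch below mirrors that cycle locally with os/exec instead of minikube's SSH runner; the component names and the --tail 400 limit are taken from the log, everything else is illustrative.

// gatherlogs.go: rough local sketch of the "Gathering logs for ..." cycle above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// run executes a command and returns its combined output; errors are ignored
// in this sketch so a missing component does not stop the whole cycle.
func run(name string, args ...string) string {
	out, _ := exec.Command(name, args...).CombinedOutput()
	return string(out)
}

func main() {
	// Unit logs, as in "Gathering logs for kubelet ..." / "for CRI-O ...".
	for _, unit := range []string{"kubelet", "crio"} {
		fmt.Printf("== journalctl -u %s ==\n%s\n", unit, run("sudo", "journalctl", "-u", unit, "-n", "400"))
	}

	// Per-component container logs, as in "listing CRI containers in root ...".
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids := strings.Fields(run("sudo", "crictl", "ps", "-a", "--quiet", "--name="+c))
		if len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			fmt.Printf("== %s [%s] ==\n%s\n", c, id, run("sudo", "crictl", "logs", "--tail", "400", id))
		}
	}
}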
	I1217 20:00:56.736153  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:56.736638  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:56.736693  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:56.736746  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:56.765576  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:56.765600  596882 cri.go:89] found id: ""
	I1217 20:00:56.765610  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:56.765676  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.769942  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:56.770013  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:56.798112  596882 cri.go:89] found id: ""
	I1217 20:00:56.798145  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.798157  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:56.798165  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:56.798234  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:56.825167  596882 cri.go:89] found id: ""
	I1217 20:00:56.825200  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.825231  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:56.825247  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:56.825311  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:56.852568  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:56.852592  596882 cri.go:89] found id: ""
	I1217 20:00:56.852602  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:56.852661  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.856829  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:56.856902  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:56.883929  596882 cri.go:89] found id: ""
	I1217 20:00:56.883973  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.883986  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:56.883999  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:56.884062  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:56.911693  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:56.911714  596882 cri.go:89] found id: ""
	I1217 20:00:56.911722  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:56.911772  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.916212  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:56.916276  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:56.942585  596882 cri.go:89] found id: ""
	I1217 20:00:56.942617  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.942633  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:56.942642  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:56.942700  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:56.971939  596882 cri.go:89] found id: ""
	I1217 20:00:56.971976  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.971990  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:56.972004  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:56.972024  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:57.001777  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:57.001806  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:57.032936  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:57.032965  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:57.078327  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:57.078364  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:57.113176  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:57.113213  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:57.201920  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:57.201957  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:57.218426  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:57.218456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:00:54.174562  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:54.194566  631473 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 20:00:54.199116  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
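The bash one-liner above is an idempotent /etc/hosts update: drop any existing host.minikube.internal mapping, append the current one, and copy the result back over /etc/hosts. A self-contained sketch of the same pattern follows; the IP and hostname are the values from the log, and the program needs root to write /etc/hosts.

// hosts_entry.go: sketch of the idempotent host.minikube.internal update above.
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const (
		path  = "/etc/hosts"
		entry = "192.168.94.1\thost.minikube.internal"
	)

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line except a stale mapping for host.minikube.internal.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}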
	I1217 20:00:54.210935  631473 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:00:54.211103  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:54.211184  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.248494  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.248518  631473 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:00:54.248568  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.273697  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.273726  631473 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:00:54.273735  631473 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1217 20:00:54.273832  631473 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:00:54.273935  631473 ssh_runner.go:195] Run: crio config
	I1217 20:00:54.323646  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:54.323671  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:54.323691  631473 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:00:54.323723  631473 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759234 NodeName:default-k8s-diff-port-759234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:00:54.323843  631473 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:00:54.323910  631473 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:00:54.333287  631473 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:00:54.333359  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:00:54.341865  631473 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 20:00:54.355367  631473 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:00:54.370136  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
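The kubeadm config printed above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new and later promoted to kubeadm.yaml before init runs. The sketch below parses that multi-document YAML and spot-checks the values this profile depends on (API server port 8444, the CRI-O socket, the systemd cgroup driver). It assumes gopkg.in/yaml.v3 is available and is not minikube's own validation.

// check_kubeadm_yaml.go: spot-check a few fields of the generated kubeadm config.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		switch doc["kind"] {
		case "InitConfiguration":
			ep, _ := doc["localAPIEndpoint"].(map[string]interface{})
			fmt.Println("bindPort:", ep["bindPort"]) // expect 8444 for this profile
		case "KubeletConfiguration":
			fmt.Println("cgroupDriver:", doc["cgroupDriver"])                         // expect systemd
			fmt.Println("containerRuntimeEndpoint:", doc["containerRuntimeEndpoint"]) // expect the crio socket
		}
	}
}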
	I1217 20:00:54.383695  631473 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:00:54.387416  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:00:54.397752  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:54.478375  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:00:54.502901  631473 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234 for IP: 192.168.94.2
	I1217 20:00:54.502928  631473 certs.go:195] generating shared ca certs ...
	I1217 20:00:54.502956  631473 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.503145  631473 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:00:54.503202  631473 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:00:54.503217  631473 certs.go:257] generating profile certs ...
	I1217 20:00:54.503295  631473 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key
	I1217 20:00:54.503322  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt with IP's: []
	I1217 20:00:54.617711  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt ...
	I1217 20:00:54.617747  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt: {Name:mk5d78d7f68addaf1f73847c6c02fd442f5e6ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.617930  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key ...
	I1217 20:00:54.617950  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key: {Name:mke8a415d0af374cf9fe8570e6fe4c7202332109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.618032  631473 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167
	I1217 20:00:54.618049  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 20:00:54.665685  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 ...
	I1217 20:00:54.665716  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167: {Name:mkfcccc5ab764237ebc01d7e772bd39ad2e57805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.665884  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 ...
	I1217 20:00:54.665904  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167: {Name:mk4c6de11c85c3fb77bd1f278ce0e0fd2b33aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.666008  631473 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt
	I1217 20:00:54.666104  631473 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key
	I1217 20:00:54.666162  631473 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key
	I1217 20:00:54.666178  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt with IP's: []
	I1217 20:00:54.735423  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt ...
	I1217 20:00:54.735452  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt: {Name:mk6946a87226d60c386ab3fc364ed99a58d10cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.735624  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key ...
	I1217 20:00:54.735638  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key: {Name:mk6cae84f91184f3a12c3274f32b7e32ae6eea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.735804  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:00:54.735844  631473 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:00:54.735855  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:00:54.735877  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:00:54.735901  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:00:54.735925  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:00:54.735974  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:54.736625  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:00:54.756198  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:00:54.773753  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:00:54.791250  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:00:54.809439  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:00:54.828101  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:00:54.847713  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:00:54.866560  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:00:54.885184  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:00:54.906455  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:00:54.924265  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:00:54.942817  631473 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
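The steps above generate the profile's client, apiserver, and proxy-client certificates and copy each cert/key pair to /var/lib/minikube/certs on the node. A quick way to sanity-check such a pair is tls.LoadX509KeyPair, which fails if the certificate and private key do not match; the paths below are the target paths from the log, and this check is illustrative rather than part of minikube.

// certpair_check.go: verify that copied cert/key pairs actually belong together.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	pairs := [][2]string{
		{"/var/lib/minikube/certs/apiserver.crt", "/var/lib/minikube/certs/apiserver.key"},
		{"/var/lib/minikube/certs/proxy-client.crt", "/var/lib/minikube/certs/proxy-client.key"},
	}
	for _, p := range pairs {
		// LoadX509KeyPair parses both files and rejects mismatched pairs.
		if _, err := tls.LoadX509KeyPair(p[0], p[1]); err != nil {
			log.Fatalf("%s / %s: %v", p[0], p[1], err)
		}
		fmt.Println("ok:", p[0])
	}
}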
	I1217 20:00:54.956309  631473 ssh_runner.go:195] Run: openssl version
	I1217 20:00:54.962641  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.971170  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:00:54.979233  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983177  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983245  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:00:55.018977  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.027253  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.035165  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.043017  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:00:55.051440  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055458  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055523  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.092379  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:00:55.101231  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:00:55.111064  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.119199  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:00:55.127063  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.130993  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.131062  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.165321  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:00:55.173294  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
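The block above installs the CA certificates into the system trust store: each PEM is linked into /etc/ssl/certs under its own name and again under its OpenSSL subject hash with a .0 suffix, which is how OpenSSL locates CAs at verification time. The sketch below reproduces that pattern for one certificate; it shells out to openssl for the hash rather than reimplementing it, the path is taken from the log, and it is only an illustration.

// trust_link.go: link a CA cert into /etc/ssl/certs by name and by subject hash.
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const cert = "/usr/share/ca-certificates/minikubeCA.pem"

	// Same hash as `openssl x509 -hash -noout -in <cert>` in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))

	for _, link := range []string{
		filepath.Join("/etc/ssl/certs", filepath.Base(cert)),
		filepath.Join("/etc/ssl/certs", hash+".0"),
	} {
		os.Remove(link) // mimic ln -fs: replace any existing link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
	}
}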
	I1217 20:00:55.181422  631473 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:00:55.185376  631473 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:00:55.185448  631473 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:55.185546  631473 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:00:55.185607  631473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:00:55.217477  631473 cri.go:89] found id: ""
	I1217 20:00:55.217551  631473 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:00:55.226933  631473 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:00:55.236854  631473 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:00:55.236934  631473 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:00:55.245579  631473 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:00:55.245602  631473 kubeadm.go:158] found existing configuration files:
	
	I1217 20:00:55.245652  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 20:00:55.253938  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:00:55.253998  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:00:55.261865  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 20:00:55.269887  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:00:55.269992  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:00:55.278000  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.286714  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:00:55.286788  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.296035  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 20:00:55.305037  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:00:55.305131  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
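The stale-config check above boils down to: for each kubeconfig under /etc/kubernetes, keep it only if it already references the expected control-plane endpoint (port 8444 for this profile), otherwise remove it so kubeadm regenerates it. A compact sketch of that check follows; the endpoint and file list come from the log, and this is not minikube's code.

// stale_kubeconfig_cleanup.go: drop kubeconfigs that point at the wrong endpoint.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err == nil && strings.Contains(string(data), endpoint) {
			continue // already points at the right endpoint, keep it
		}
		// Missing file or wrong endpoint: make sure it is gone so kubeadm rewrites it.
		if err := os.Remove(f); err != nil && !os.IsNotExist(err) {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Println("cleared:", f)
	}
}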
	I1217 20:00:55.312998  631473 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:00:55.373971  631473 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:00:55.436480  631473 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1217 20:00:56.071929  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:58.571128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:56.104574  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:58.604838  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:57.277327  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:57.277349  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:57.277366  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:59.811179  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1217 20:01:01.071960  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:03.571727  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:00.604975  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:02.605263  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:05.106561  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:01:06.067126  631473 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:01:06.067196  631473 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:01:06.067312  631473 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:01:06.067401  631473 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:01:06.067442  631473 kubeadm.go:319] OS: Linux
	I1217 20:01:06.067513  631473 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:01:06.067558  631473 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:01:06.067635  631473 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:01:06.067697  631473 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:01:06.067738  631473 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:01:06.067813  631473 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:01:06.067880  631473 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:01:06.067957  631473 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:01:06.068050  631473 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:01:06.068197  631473 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:01:06.068340  631473 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:01:06.068462  631473 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:01:06.070305  631473 out.go:252]   - Generating certificates and keys ...
	I1217 20:01:06.070395  631473 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:01:06.070458  631473 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:01:06.070524  631473 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:01:06.070580  631473 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:01:06.070634  631473 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:01:06.070675  631473 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:01:06.070722  631473 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:01:06.070887  631473 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.070954  631473 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:01:06.071106  631473 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.071215  631473 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:01:06.071290  631473 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:01:06.071343  631473 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:01:06.071423  631473 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:01:06.071499  631473 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:01:06.071573  631473 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:01:06.071647  631473 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:01:06.071757  631473 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:01:06.071841  631473 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:01:06.071959  631473 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:01:06.072065  631473 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:01:06.073367  631473 out.go:252]   - Booting up control plane ...
	I1217 20:01:06.073455  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:01:06.073530  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:01:06.073591  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:01:06.073692  631473 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:01:06.073789  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:01:06.073886  631473 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:01:06.073960  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:01:06.074002  631473 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:01:06.074140  631473 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:01:06.074228  631473 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:01:06.074276  631473 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001922128s
	I1217 20:01:06.074352  631473 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:01:06.074416  631473 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1217 20:01:06.074487  631473 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:01:06.074549  631473 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:01:06.074624  631473 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.929603333s
	I1217 20:01:06.074691  631473 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.11807832s
	I1217 20:01:06.074783  631473 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002138646s
	I1217 20:01:06.074883  631473 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:01:06.074999  631473 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:01:06.075046  631473 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:01:06.075233  631473 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-759234 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:01:06.075296  631473 kubeadm.go:319] [bootstrap-token] Using token: v6m366.ufgpfn05m87tgdpr
	I1217 20:01:06.076758  631473 out.go:252]   - Configuring RBAC rules ...
	I1217 20:01:06.076848  631473 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:01:06.076928  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:01:06.077189  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:01:06.077365  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:01:06.077488  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:01:06.077579  631473 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:01:06.077727  631473 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:01:06.077797  631473 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:01:06.077864  631473 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:01:06.077879  631473 kubeadm.go:319] 
	I1217 20:01:06.077952  631473 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:01:06.077959  631473 kubeadm.go:319] 
	I1217 20:01:06.078019  631473 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:01:06.078028  631473 kubeadm.go:319] 
	I1217 20:01:06.078048  631473 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:01:06.078140  631473 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:01:06.078221  631473 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:01:06.078230  631473 kubeadm.go:319] 
	I1217 20:01:06.078313  631473 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:01:06.078322  631473 kubeadm.go:319] 
	I1217 20:01:06.078396  631473 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:01:06.078404  631473 kubeadm.go:319] 
	I1217 20:01:06.078487  631473 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:01:06.078589  631473 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:01:06.078685  631473 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:01:06.078694  631473 kubeadm.go:319] 
	I1217 20:01:06.078778  631473 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:01:06.078851  631473 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:01:06.078857  631473 kubeadm.go:319] 
	I1217 20:01:06.078933  631473 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079036  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:01:06.079057  631473 kubeadm.go:319] 	--control-plane 
	I1217 20:01:06.079060  631473 kubeadm.go:319] 
	I1217 20:01:06.079150  631473 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:01:06.079160  631473 kubeadm.go:319] 
	I1217 20:01:06.079259  631473 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079417  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
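The kubeadm join commands printed above pin the cluster CA through --discovery-token-ca-cert-hash. For kubeadm's token-based discovery that value is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate (the RFC 7469 pin format), so it can be recomputed from ca.crt. The sketch below does exactly that; the path is the one used on the minikube node, and the program is illustrative rather than part of the test.

// ca_cert_hash.go: recompute the --discovery-token-ca-cert-hash value from ca.crt.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Hash the DER encoding of the certificate's Subject Public Key Info.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}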
	I1217 20:01:06.079446  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:01:06.079457  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:06.081231  631473 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:01:04.812163  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 20:01:04.812235  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:04.812292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:04.844291  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:04.844315  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:04.844319  596882 cri.go:89] found id: ""
	I1217 20:01:04.844328  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:04.844385  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.848366  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.852177  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:04.852256  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:04.883987  596882 cri.go:89] found id: ""
	I1217 20:01:04.884024  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.884038  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:04.884051  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:04.884140  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:04.914990  596882 cri.go:89] found id: ""
	I1217 20:01:04.915020  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.915031  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:04.915040  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:04.915135  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:04.944932  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:04.944965  596882 cri.go:89] found id: ""
	I1217 20:01:04.944978  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:04.945047  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.949407  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:04.949476  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:04.980714  596882 cri.go:89] found id: ""
	I1217 20:01:04.980744  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.980756  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:04.980765  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:04.980827  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:05.014278  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:05.014303  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:05.014306  596882 cri.go:89] found id: ""
	I1217 20:01:05.014315  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:05.014369  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.019212  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.023605  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:05.023688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:05.054178  596882 cri.go:89] found id: ""
	I1217 20:01:05.054210  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.054220  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:05.054226  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:05.054297  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:05.089365  596882 cri.go:89] found id: ""
	I1217 20:01:05.089398  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.089410  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:05.089432  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:05.089451  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:05.129946  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:05.129977  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:05.229093  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:05.229136  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:01:06.082676  631473 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:01:06.087568  631473 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:01:06.087588  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:01:06.101995  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:01:06.315905  631473 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-759234 minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=default-k8s-diff-port-759234 minikube.k8s.io/primary=true
	I1217 20:01:06.327829  631473 ops.go:34] apiserver oom_adj: -16
	I1217 20:01:06.396458  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.897042  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.396599  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.604674  624471 pod_ready.go:94] pod "coredns-7d764666f9-988jw" is "Ready"
	I1217 20:01:07.604701  624471 pod_ready.go:86] duration metric: took 37.00583192s for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.607174  624471 pod_ready.go:83] waiting for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.611282  624471 pod_ready.go:94] pod "etcd-no-preload-832842" is "Ready"
	I1217 20:01:07.611311  624471 pod_ready.go:86] duration metric: took 4.112039ms for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.613297  624471 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.617064  624471 pod_ready.go:94] pod "kube-apiserver-no-preload-832842" is "Ready"
	I1217 20:01:07.617117  624471 pod_ready.go:86] duration metric: took 3.797766ms for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.619212  624471 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.803328  624471 pod_ready.go:94] pod "kube-controller-manager-no-preload-832842" is "Ready"
	I1217 20:01:07.803357  624471 pod_ready.go:86] duration metric: took 184.117172ms for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.003550  624471 pod_ready.go:83] waiting for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.403261  624471 pod_ready.go:94] pod "kube-proxy-jc5dd" is "Ready"
	I1217 20:01:08.403288  624471 pod_ready.go:86] duration metric: took 399.709625ms for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.603502  624471 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002875  624471 pod_ready.go:94] pod "kube-scheduler-no-preload-832842" is "Ready"
	I1217 20:01:09.002905  624471 pod_ready.go:86] duration metric: took 399.378114ms for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002919  624471 pod_ready.go:40] duration metric: took 38.408153316s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:09.051128  624471 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:01:09.053534  624471 out.go:179] * Done! kubectl is now configured to use "no-preload-832842" cluster and "default" namespace by default
	W1217 20:01:06.072320  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:08.571546  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	I1217 20:01:07.897116  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.397124  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.897399  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.397296  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.897202  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.397310  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.897175  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.975504  631473 kubeadm.go:1114] duration metric: took 4.659591269s to wait for elevateKubeSystemPrivileges
	I1217 20:01:10.975540  631473 kubeadm.go:403] duration metric: took 15.790098497s to StartCluster
	I1217 20:01:10.975558  631473 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.975646  631473 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:10.977547  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.977796  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:01:10.977817  631473 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:10.977867  631473 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:01:10.978006  631473 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978029  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:10.978054  631473 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978101  631473 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-759234"
	I1217 20:01:10.978031  631473 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-759234"
	I1217 20:01:10.978248  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:10.978539  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.978747  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.979515  631473 out.go:179] * Verifying Kubernetes components...
	I1217 20:01:10.980948  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:11.004351  631473 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:01:09.570523  625400 pod_ready.go:94] pod "coredns-5dd5756b68-gbhs5" is "Ready"
	I1217 20:01:09.570551  625400 pod_ready.go:86] duration metric: took 34.005219617s for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.573051  625400 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.576701  625400 pod_ready.go:94] pod "etcd-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.576725  625400 pod_ready.go:86] duration metric: took 3.651465ms for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.579243  625400 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.583452  625400 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.583478  625400 pod_ready.go:86] duration metric: took 4.213779ms for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.585997  625400 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.768942  625400 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.768977  625400 pod_ready.go:86] duration metric: took 182.957254ms for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.970200  625400 pod_ready.go:83] waiting for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.368408  625400 pod_ready.go:94] pod "kube-proxy-bdzb6" is "Ready"
	I1217 20:01:10.368435  625400 pod_ready.go:86] duration metric: took 398.20631ms for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.569794  625400 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969210  625400 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-894575" is "Ready"
	I1217 20:01:10.969252  625400 pod_ready.go:86] duration metric: took 399.426249ms for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969270  625400 pod_ready.go:40] duration metric: took 35.409804659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:11.041190  625400 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1217 20:01:11.044208  625400 out.go:203] 
	W1217 20:01:11.045630  625400 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 20:01:11.047652  625400 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 20:01:11.049163  625400 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-894575" cluster and "default" namespace by default
	I1217 20:01:11.005141  631473 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-759234"
	I1217 20:01:11.005190  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:11.005673  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:11.005685  631473 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.005702  631473 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:01:11.005753  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.034589  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.037037  631473 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.037065  631473 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:01:11.037212  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.065091  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.078156  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:01:11.158438  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:11.173742  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.214719  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.376291  631473 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 20:01:11.376906  631473 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-759234" to be "Ready" ...
	I1217 20:01:11.616252  631473 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:01:11.617452  631473 addons.go:530] duration metric: took 639.583404ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:01:11.880698  631473 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-759234" context rescaled to 1 replicas
	I1217 20:01:15.295985  596882 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066827019s)
	W1217 20:01:15.296022  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 20:01:15.296032  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:15.296044  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:15.329910  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:15.329943  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:15.361430  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:15.361465  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:15.379135  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:15.379176  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:15.413631  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:15.413671  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:15.444072  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:15.444120  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:15.474296  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:15.474325  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:13.379733  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:15.380677  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:17.382167  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	I1217 20:01:18.028829  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:19.268145  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:48746->192.168.76.2:8443: read: connection reset by peer
	I1217 20:01:19.268222  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:19.268292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:19.297951  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.297972  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:19.297976  596882 cri.go:89] found id: ""
	I1217 20:01:19.297984  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:19.298048  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.302214  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.305947  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:19.306014  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:19.333763  596882 cri.go:89] found id: ""
	I1217 20:01:19.333789  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.333798  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:19.333804  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:19.333864  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:19.362644  596882 cri.go:89] found id: ""
	I1217 20:01:19.362672  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.362682  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:19.362687  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:19.362752  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:19.394030  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.394059  596882 cri.go:89] found id: ""
	I1217 20:01:19.394071  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:19.394157  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.398506  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:19.398583  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:19.425535  596882 cri.go:89] found id: ""
	I1217 20:01:19.425560  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.425569  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:19.425575  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:19.425638  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:19.454704  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.454726  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.454731  596882 cri.go:89] found id: ""
	I1217 20:01:19.454743  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:19.454811  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.459054  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.463029  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:19.463111  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:19.491583  596882 cri.go:89] found id: ""
	I1217 20:01:19.491610  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.491622  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:19.491631  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:19.491688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:19.520292  596882 cri.go:89] found id: ""
	I1217 20:01:19.520328  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.520341  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:19.520364  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:19.520390  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:19.604632  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:19.604674  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:19.621452  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:19.621486  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:19.680554  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:19.680581  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:19.680597  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.712658  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:19.712693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.740964  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:19.740997  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:19.773014  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:19.773045  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	W1217 20:01:19.802765  596882 logs.go:130] failed kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	
	** /stderr **
	I1217 20:01:19.802797  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:19.802814  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.830245  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:19.830272  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.857816  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:19.857846  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:19.879976  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:21.880734  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 17 20:00:49 no-preload-832842 crio[568]: time="2025-12-17T20:00:49.041013202Z" level=info msg="Started container" PID=1764 containerID=55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper id=b55325a1-724e-40af-994c-8eb65574bf9b name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec15d0feb093f825afaffc6a197d0ff3ecd9a66fddff8fb31f9437971f51b5ea
	Dec 17 20:00:49 no-preload-832842 crio[568]: time="2025-12-17T20:00:49.091907515Z" level=info msg="Removing container: 6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5" id=448415fe-e3cf-4d07-894f-b257b64ed1b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:00:49 no-preload-832842 crio[568]: time="2025-12-17T20:00:49.102848057Z" level=info msg="Removed container 6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=448415fe-e3cf-4d07-894f-b257b64ed1b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.125645695Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58a5e366-1444-4fe3-8c5f-79eb7b5c47d6 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.126676272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6a0d64ec-c0e8-4fd8-869f-f7980df347ec name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.128293717Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=874de56e-6ee5-4be0-96b1-9f47e5f9b362 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.128443277Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.133444688Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.133777555Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3b2c430c59f49687533e237ba1b1610fac136f1fc84542d3996591ad1cc891bb/merged/etc/passwd: no such file or directory"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.133884Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3b2c430c59f49687533e237ba1b1610fac136f1fc84542d3996591ad1cc891bb/merged/etc/group: no such file or directory"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.134277142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.167114092Z" level=info msg="Created container d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137: kube-system/storage-provisioner/storage-provisioner" id=874de56e-6ee5-4be0-96b1-9f47e5f9b362 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.16791213Z" level=info msg="Starting container: d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137" id=76fb00f7-430c-4fe6-a67b-c3e047bff16b name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:01 no-preload-832842 crio[568]: time="2025-12-17T20:01:01.170225357Z" level=info msg="Started container" PID=1778 containerID=d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137 description=kube-system/storage-provisioner/storage-provisioner id=76fb00f7-430c-4fe6-a67b-c3e047bff16b name=/runtime.v1.RuntimeService/StartContainer sandboxID=37a4519d64a6155074c56e3e7538f11d4ebe789e5a292f8d39c5395c31e6ac10
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.991979195Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=df20da0d-cefd-4225-a9c3-ea3946f6f4fc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.99299555Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dce755d2-6966-47d2-807f-ff2c0a705b6d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.994192288Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=99c6f88b-2e9f-4001-acee-b5b8ecf09875 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.994362777Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:09 no-preload-832842 crio[568]: time="2025-12-17T20:01:09.99983872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.000367243Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.031327228Z" level=info msg="Created container c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=99c6f88b-2e9f-4001-acee-b5b8ecf09875 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.032053696Z" level=info msg="Starting container: c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337" id=9eb3ed70-aa91-451e-b497-4502bf6db091 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.034285823Z" level=info msg="Started container" PID=1811 containerID=c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337 description=kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper id=9eb3ed70-aa91-451e-b497-4502bf6db091 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ec15d0feb093f825afaffc6a197d0ff3ecd9a66fddff8fb31f9437971f51b5ea
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.152215775Z" level=info msg="Removing container: 55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce" id=86d70372-04f7-4453-92e6-ddeeaee7c600 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:10 no-preload-832842 crio[568]: time="2025-12-17T20:01:10.161528432Z" level=info msg="Removed container 55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce: kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j/dashboard-metrics-scraper" id=86d70372-04f7-4453-92e6-ddeeaee7c600 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	c35ae1f5685d7       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           16 seconds ago      Exited              dashboard-metrics-scraper   3                   ec15d0feb093f       dashboard-metrics-scraper-867fb5f87b-zjc4j   kubernetes-dashboard
	d71ed695baa76       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   37a4519d64a61       storage-provisioner                          kube-system
	55c1a97eef28c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   131175fcfc0fa       kubernetes-dashboard-b84665fb8-cfd69         kubernetes-dashboard
	df79f3414f094       aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139                                           56 seconds ago      Running             coredns                     0                   b4799098ac67e       coredns-7d764666f9-988jw                     kube-system
	28ed811767308       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   c45848f548378       busybox                                      default
	74a2be0dba394       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                           56 seconds ago      Running             kube-proxy                  0                   71ca062f3ed9e       kube-proxy-jc5dd                             kube-system
	574a5ed645344       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   37a4519d64a61       storage-provisioner                          kube-system
	6dc1bf580a5e5       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           56 seconds ago      Running             kindnet-cni                 0                   9eaa9c854caa7       kindnet-t5x5v                                kube-system
	aa0f70514b3b3       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                           59 seconds ago      Running             kube-controller-manager     0                   670885867d7cc       kube-controller-manager-no-preload-832842    kube-system
	3c8014a76c7ed       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                           59 seconds ago      Running             etcd                        0                   adea169e1e2a8       etcd-no-preload-832842                       kube-system
	93adc4b861b7c       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                           59 seconds ago      Running             kube-scheduler              0                   1fc9e8de07a77       kube-scheduler-no-preload-832842             kube-system
	fc98dcbd3e923       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                           59 seconds ago      Running             kube-apiserver              0                   5debfc43044b1       kube-apiserver-no-preload-832842             kube-system
	
	
	==> coredns [df79f3414f09421efcb91bbc4abcc73e07bf62fc320f79ed6c541180aa4945ab] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:32939 - 62348 "HINFO IN 6765570193243579541.1179140327450952908. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.42761068s
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	
	
	==> describe nodes <==
	Name:               no-preload-832842
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-832842
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=no-preload-832842
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_59_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:59:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-832842
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:01:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:00:59 +0000   Wed, 17 Dec 2025 19:59:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-832842
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                e81b3478-a278-4914-8840-ea9b4f5123a7
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-7d764666f9-988jw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-832842                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-t5x5v                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-832842              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-832842     200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-jc5dd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-832842              100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-867fb5f87b-zjc4j    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-b84665fb8-cfd69          0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  110s  node-controller  Node no-preload-832842 event: Registered Node no-preload-832842 in Controller
	  Normal  RegisteredNode  54s   node-controller  Node no-preload-832842 event: Registered Node no-preload-832842 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [3c8014a76c7ede91c3cd5009249d11a432295b5b5abd84d90df0cea58173d3dd] <==
	{"level":"info","ts":"2025-12-17T20:00:27.555006Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T20:00:27.555311Z","caller":"fileutil/purge.go:49","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-12-17T20:00:27.555173Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T20:00:27.555215Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T20:00:27.555524Z","caller":"etcdserver/server.go:483","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"f23060b075c4c089","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2025-12-17T20:00:27.555668Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T20:00:27.555870Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T20:00:27.944308Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T20:00:27.944356Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T20:00:27.944429Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T20:00:27.944442Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:00:27.944457Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.945223Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.945252Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:00:27.945283Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.945294Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:00:27.947737Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:no-preload-832842 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T20:00:27.947779Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:00:27.947851Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:00:27.948203Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T20:00:27.948330Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T20:00:27.949267Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:00:27.949317Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:00:27.954124Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-17T20:00:27.954305Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:01:26 up  1:43,  0 user,  load average: 3.48, 3.24, 2.33
	Linux no-preload-832842 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6dc1bf580a5e5d88fdf2f6bbe5d1905fb56db30030d094660f124897fd457658] <==
	I1217 20:00:30.539410       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:00:30.594507       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 20:00:30.594693       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:00:30.594721       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:00:30.594750       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:00:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:00:30.794979       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:00:30.795168       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:00:30.795187       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:00:30.937403       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:00:31.195591       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:00:31.195625       1 metrics.go:72] Registering metrics
	I1217 20:00:31.195692       1 controller.go:711] "Syncing nftables rules"
	I1217 20:00:40.795164       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:00:40.795228       1 main.go:301] handling current node
	I1217 20:00:50.795193       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:00:50.795240       1 main.go:301] handling current node
	I1217 20:01:00.795186       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:01:00.795220       1 main.go:301] handling current node
	I1217 20:01:10.795302       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:01:10.795365       1 main.go:301] handling current node
	I1217 20:01:20.797912       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1217 20:01:20.797952       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fc98dcbd3e923feb9befb5e08f3923050cddcdcd6ec0dde8a4a828548f21afbc] <==
	I1217 20:00:29.028155       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:00:29.028251       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:00:29.028255       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:00:29.028238       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 20:00:29.031354       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:29.031412       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:29.031447       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:00:29.031617       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:00:29.033942       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1217 20:00:29.040686       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:00:29.076871       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:00:29.082167       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:29.082198       1 policy_source.go:248] refreshing policies
	I1217 20:00:29.097621       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:00:29.315764       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:00:29.344221       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:00:29.367629       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:00:29.375216       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:00:29.382717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:00:29.421615       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.64.7"}
	I1217 20:00:29.435712       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.111.16"}
	I1217 20:00:29.931340       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 20:00:32.626039       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:00:32.677494       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:00:32.826471       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [aa0f70514b3b3987679fa08562d6a29d0cde6f41668ff6920603c0af90405bbe] <==
	I1217 20:00:32.192095       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192072       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1217 20:00:32.192195       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192619       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192721       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.192909       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.193363       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.194346       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.198859       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.202967       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.203030       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.203057       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.206489       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.211378       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.211401       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.211469       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.212607       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.212627       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.212654       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.215772       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.227342       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.289938       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.294485       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:32.294529       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 20:00:32.294537       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [74a2be0dba394331147af1f7139cc8715764693116a735ed916bd4c8ee2fd3bf] <==
	I1217 20:00:30.394049       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:00:30.471197       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:00:30.571711       1 shared_informer.go:377] "Caches are synced"
	I1217 20:00:30.571753       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 20:00:30.571964       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:00:30.593382       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:00:30.593446       1 server_linux.go:136] "Using iptables Proxier"
	I1217 20:00:30.599896       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:00:30.600485       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 20:00:30.600564       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:30.602134       1 config.go:200] "Starting service config controller"
	I1217 20:00:30.602164       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:00:30.602217       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:00:30.602238       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:00:30.602282       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:00:30.602315       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:00:30.602325       1 config.go:309] "Starting node config controller"
	I1217 20:00:30.602362       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:00:30.602372       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:00:30.702278       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:00:30.702341       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:00:30.702449       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [93adc4b861b7c2cb084b258ba073a7308743dab281018c38f60ca99fa8a8c8eb] <==
	I1217 20:00:27.674048       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:00:28.973499       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:00:28.973645       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:00:28.973668       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:00:28.973696       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:00:29.017329       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 20:00:29.017377       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:29.020516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:00:29.020564       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:00:29.020696       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:00:29.020724       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:00:29.122491       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 20:00:44 no-preload-832842 kubelet[723]: E1217 20:00:44.074404     723 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-832842" containerName="kube-apiserver"
	Dec 17 20:00:48 no-preload-832842 kubelet[723]: E1217 20:00:48.991175     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:00:48 no-preload-832842 kubelet[723]: I1217 20:00:48.991230     723 scope.go:122] "RemoveContainer" containerID="6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: I1217 20:00:49.089951     723 scope.go:122] "RemoveContainer" containerID="6d2c7a993ad05ebd47e395b0a8846c2cd798e6411ba252f85d1948f3688548f5"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: E1217 20:00:49.090263     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: I1217 20:00:49.090304     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:00:49 no-preload-832842 kubelet[723]: E1217 20:00:49.090486     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:00:51 no-preload-832842 kubelet[723]: E1217 20:00:51.830902     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:00:51 no-preload-832842 kubelet[723]: I1217 20:00:51.830946     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:00:51 no-preload-832842 kubelet[723]: E1217 20:00:51.831146     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:01:01 no-preload-832842 kubelet[723]: I1217 20:01:01.125188     723 scope.go:122] "RemoveContainer" containerID="574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144"
	Dec 17 20:01:07 no-preload-832842 kubelet[723]: E1217 20:01:07.538402     723 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-988jw" containerName="coredns"
	Dec 17 20:01:09 no-preload-832842 kubelet[723]: E1217 20:01:09.991490     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:01:09 no-preload-832842 kubelet[723]: I1217 20:01:09.991525     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: I1217 20:01:10.150909     723 scope.go:122] "RemoveContainer" containerID="55ca2ad24b8a2ee9241203fdd178b54f929582e37041dd86d79b3f677841a5ce"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: E1217 20:01:10.151207     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: I1217 20:01:10.151240     723 scope.go:122] "RemoveContainer" containerID="c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	Dec 17 20:01:10 no-preload-832842 kubelet[723]: E1217 20:01:10.151431     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:01:11 no-preload-832842 kubelet[723]: E1217 20:01:11.831828     723 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" containerName="dashboard-metrics-scraper"
	Dec 17 20:01:11 no-preload-832842 kubelet[723]: I1217 20:01:11.831876     723 scope.go:122] "RemoveContainer" containerID="c35ae1f5685d7eb989e5e2ae71d012fc2d94fb19e3073568b71a6676af20d337"
	Dec 17 20:01:11 no-preload-832842 kubelet[723]: E1217 20:01:11.832123     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-867fb5f87b-zjc4j_kubernetes-dashboard(da73ea11-bc61-43cc-9a72-f9172ec75207)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-867fb5f87b-zjc4j" podUID="da73ea11-bc61-43cc-9a72-f9172ec75207"
	Dec 17 20:01:21 no-preload-832842 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:01:21 no-preload-832842 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:01:21 no-preload-832842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:01:21 no-preload-832842 systemd[1]: kubelet.service: Consumed 1.816s CPU time.
	
	
	==> kubernetes-dashboard [55c1a97eef28cd0406e0d4aef3df5a460e2bc3114b4471c21d47e187a026216d] <==
	2025/12/17 20:00:40 Starting overwatch
	2025/12/17 20:00:40 Using namespace: kubernetes-dashboard
	2025/12/17 20:00:40 Using in-cluster config to connect to apiserver
	2025/12/17 20:00:40 Using secret token for csrf signing
	2025/12/17 20:00:40 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:00:40 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:00:40 Successful initial request to the apiserver, version: v1.35.0-rc.1
	2025/12/17 20:00:40 Generating JWE encryption key
	2025/12/17 20:00:40 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:00:40 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:00:40 Initializing JWE encryption key from synchronized object
	2025/12/17 20:00:40 Creating in-cluster Sidecar client
	2025/12/17 20:00:40 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:00:40 Serving insecurely on HTTP port: 9090
	2025/12/17 20:01:10 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [574a5ed6453441e6d8a97097093213b4144a910e98bd02d4b28191ce5e459144] <==
	I1217 20:00:30.354687       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:01:00.359543       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d71ed695baa767c1509bc38e05b709bad367861f9b3be89d656fd64d0ea54137] <==
	I1217 20:01:01.184158       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:01:01.192401       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:01:01.192453       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:01:01.195518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:04.650919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:08.912149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:12.511335       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:15.565355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:18.587726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:18.592870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:01:18.593070       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:01:18.593175       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc57e620-3a27-4c8c-a77e-e1c5cd6ef8f6", APIVersion:"v1", ResourceVersion:"680", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-832842_850ea18b-545c-49a8-9739-189a6fa3e3bd became leader
	I1217 20:01:18.593312       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-832842_850ea18b-545c-49a8-9739-189a6fa3e3bd!
	W1217 20:01:18.595843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:18.599149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:01:18.693862       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-832842_850ea18b-545c-49a8-9739-189a6fa3e3bd!
	W1217 20:01:20.602886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:20.607496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:22.611293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:22.615407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:24.618358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:24.622877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:26.626464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:26.630683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-832842 -n no-preload-832842
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-832842 -n no-preload-832842: exit status 2 (380.113522ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-832842 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.76s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.91s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-894575 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-894575 --alsologtostderr -v=1: exit status 80 (2.598666083s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-894575 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:01:22.966222  636903 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:01:22.966342  636903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:22.966352  636903 out.go:374] Setting ErrFile to fd 2...
	I1217 20:01:22.966359  636903 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:22.966657  636903 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:01:22.966947  636903 out.go:368] Setting JSON to false
	I1217 20:01:22.966978  636903 mustload.go:66] Loading cluster: old-k8s-version-894575
	I1217 20:01:22.967459  636903 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 20:01:22.968070  636903 cli_runner.go:164] Run: docker container inspect old-k8s-version-894575 --format={{.State.Status}}
	I1217 20:01:22.988473  636903 host.go:66] Checking if "old-k8s-version-894575" exists ...
	I1217 20:01:22.988824  636903 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:01:23.057700  636903 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 20:01:23.047062847 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:01:23.058563  636903 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-894575 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 20:01:23.060394  636903 out.go:179] * Pausing node old-k8s-version-894575 ... 
	I1217 20:01:23.061572  636903 host.go:66] Checking if "old-k8s-version-894575" exists ...
	I1217 20:01:23.061943  636903 ssh_runner.go:195] Run: systemctl --version
	I1217 20:01:23.061996  636903 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-894575
	I1217 20:01:23.090703  636903 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/old-k8s-version-894575/id_rsa Username:docker}
	I1217 20:01:23.200897  636903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:23.213912  636903 pause.go:52] kubelet running: true
	I1217 20:01:23.213989  636903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:23.402658  636903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:23.402766  636903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:23.499794  636903 cri.go:89] found id: "464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da"
	I1217 20:01:23.499829  636903 cri.go:89] found id: "ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2"
	I1217 20:01:23.499835  636903 cri.go:89] found id: "780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33"
	I1217 20:01:23.499840  636903 cri.go:89] found id: "71ddc80929603be65503dc71e856358367024bf67d78ffb6c1371882b159eff9"
	I1217 20:01:23.499845  636903 cri.go:89] found id: "3f0565e2bdcd725f2a285b6794d9cb087b195ddb248255a1410193df892996c7"
	I1217 20:01:23.499850  636903 cri.go:89] found id: "484a1e94925a1a7ea27bb0e8881ce92d0ba724ee5dc0be0b55aa22d4968fb0f9"
	I1217 20:01:23.499855  636903 cri.go:89] found id: "71cce81b2a47a327a9532ef2473382c328c9042db27d9361ba053cc1855855f4"
	I1217 20:01:23.499859  636903 cri.go:89] found id: "467ab50d14f76d9794b7546e57cbb0eec5d9291e092f5be7dae85296a7ea1b59"
	I1217 20:01:23.499863  636903 cri.go:89] found id: "80c6fccb8bdf5504ced354de5e08d38c6385613976d63820be5bf2822f675a3d"
	I1217 20:01:23.499871  636903 cri.go:89] found id: "294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	I1217 20:01:23.499876  636903 cri.go:89] found id: "75a986f0ae8c399acd6a7e6fb4b4edd21dd8ecafde18a0e3734080cd5e518d63"
	I1217 20:01:23.499880  636903 cri.go:89] found id: ""
	I1217 20:01:23.499931  636903 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:23.516235  636903 retry.go:31] will retry after 285.345744ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:23Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:01:23.801728  636903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:23.815488  636903 pause.go:52] kubelet running: false
	I1217 20:01:23.815564  636903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:23.979176  636903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:23.979284  636903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:24.052998  636903 cri.go:89] found id: "464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da"
	I1217 20:01:24.053023  636903 cri.go:89] found id: "ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2"
	I1217 20:01:24.053030  636903 cri.go:89] found id: "780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33"
	I1217 20:01:24.053035  636903 cri.go:89] found id: "71ddc80929603be65503dc71e856358367024bf67d78ffb6c1371882b159eff9"
	I1217 20:01:24.053039  636903 cri.go:89] found id: "3f0565e2bdcd725f2a285b6794d9cb087b195ddb248255a1410193df892996c7"
	I1217 20:01:24.053044  636903 cri.go:89] found id: "484a1e94925a1a7ea27bb0e8881ce92d0ba724ee5dc0be0b55aa22d4968fb0f9"
	I1217 20:01:24.053048  636903 cri.go:89] found id: "71cce81b2a47a327a9532ef2473382c328c9042db27d9361ba053cc1855855f4"
	I1217 20:01:24.053052  636903 cri.go:89] found id: "467ab50d14f76d9794b7546e57cbb0eec5d9291e092f5be7dae85296a7ea1b59"
	I1217 20:01:24.053056  636903 cri.go:89] found id: "80c6fccb8bdf5504ced354de5e08d38c6385613976d63820be5bf2822f675a3d"
	I1217 20:01:24.053069  636903 cri.go:89] found id: "294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	I1217 20:01:24.053088  636903 cri.go:89] found id: "75a986f0ae8c399acd6a7e6fb4b4edd21dd8ecafde18a0e3734080cd5e518d63"
	I1217 20:01:24.053094  636903 cri.go:89] found id: ""
	I1217 20:01:24.053150  636903 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:24.065451  636903 retry.go:31] will retry after 212.205882ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:24Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:01:24.277870  636903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:24.291026  636903 pause.go:52] kubelet running: false
	I1217 20:01:24.291109  636903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:24.462542  636903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:24.462611  636903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:24.540308  636903 cri.go:89] found id: "464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da"
	I1217 20:01:24.540333  636903 cri.go:89] found id: "ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2"
	I1217 20:01:24.540338  636903 cri.go:89] found id: "780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33"
	I1217 20:01:24.540341  636903 cri.go:89] found id: "71ddc80929603be65503dc71e856358367024bf67d78ffb6c1371882b159eff9"
	I1217 20:01:24.540344  636903 cri.go:89] found id: "3f0565e2bdcd725f2a285b6794d9cb087b195ddb248255a1410193df892996c7"
	I1217 20:01:24.540348  636903 cri.go:89] found id: "484a1e94925a1a7ea27bb0e8881ce92d0ba724ee5dc0be0b55aa22d4968fb0f9"
	I1217 20:01:24.540351  636903 cri.go:89] found id: "71cce81b2a47a327a9532ef2473382c328c9042db27d9361ba053cc1855855f4"
	I1217 20:01:24.540354  636903 cri.go:89] found id: "467ab50d14f76d9794b7546e57cbb0eec5d9291e092f5be7dae85296a7ea1b59"
	I1217 20:01:24.540356  636903 cri.go:89] found id: "80c6fccb8bdf5504ced354de5e08d38c6385613976d63820be5bf2822f675a3d"
	I1217 20:01:24.540371  636903 cri.go:89] found id: "294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	I1217 20:01:24.540376  636903 cri.go:89] found id: "75a986f0ae8c399acd6a7e6fb4b4edd21dd8ecafde18a0e3734080cd5e518d63"
	I1217 20:01:24.540379  636903 cri.go:89] found id: ""
	I1217 20:01:24.540429  636903 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:24.553701  636903 retry.go:31] will retry after 638.801594ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:24Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:01:25.193603  636903 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:25.223140  636903 pause.go:52] kubelet running: false
	I1217 20:01:25.223208  636903 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:01:25.387441  636903 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:01:25.387529  636903 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:01:25.469293  636903 cri.go:89] found id: "464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da"
	I1217 20:01:25.469312  636903 cri.go:89] found id: "ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2"
	I1217 20:01:25.469316  636903 cri.go:89] found id: "780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33"
	I1217 20:01:25.469319  636903 cri.go:89] found id: "71ddc80929603be65503dc71e856358367024bf67d78ffb6c1371882b159eff9"
	I1217 20:01:25.469322  636903 cri.go:89] found id: "3f0565e2bdcd725f2a285b6794d9cb087b195ddb248255a1410193df892996c7"
	I1217 20:01:25.469325  636903 cri.go:89] found id: "484a1e94925a1a7ea27bb0e8881ce92d0ba724ee5dc0be0b55aa22d4968fb0f9"
	I1217 20:01:25.469328  636903 cri.go:89] found id: "71cce81b2a47a327a9532ef2473382c328c9042db27d9361ba053cc1855855f4"
	I1217 20:01:25.469332  636903 cri.go:89] found id: "467ab50d14f76d9794b7546e57cbb0eec5d9291e092f5be7dae85296a7ea1b59"
	I1217 20:01:25.469337  636903 cri.go:89] found id: "80c6fccb8bdf5504ced354de5e08d38c6385613976d63820be5bf2822f675a3d"
	I1217 20:01:25.469357  636903 cri.go:89] found id: "294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	I1217 20:01:25.469361  636903 cri.go:89] found id: "75a986f0ae8c399acd6a7e6fb4b4edd21dd8ecafde18a0e3734080cd5e518d63"
	I1217 20:01:25.469366  636903 cri.go:89] found id: ""
	I1217 20:01:25.469411  636903 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:01:25.486832  636903 out.go:203] 
	W1217 20:01:25.488098  636903 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:01:25.488116  636903 out.go:285] * 
	* 
	W1217 20:01:25.492918  636903 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:01:25.494182  636903 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-894575 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-894575
helpers_test.go:244: (dbg) docker inspect old-k8s-version-894575:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c",
	        "Created": "2025-12-17T19:59:10.569830275Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 625646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:00:24.54970804Z",
	            "FinishedAt": "2025-12-17T20:00:23.607860294Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/hosts",
	        "LogPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c-json.log",
	        "Name": "/old-k8s-version-894575",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-894575:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-894575",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c",
	                "LowerDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-894575",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-894575/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-894575",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-894575",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-894575",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8cac54020272609db5b8b6033223539b316bb31e69b928e889a7e91959c5216b",
	            "SandboxKey": "/var/run/docker/netns/8cac54020272",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-894575": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0ce1019d98582b4ef902421b21faaa999552d06bbfa4979e1d39a9d27bb73b1",
	                    "EndpointID": "1d28ac80344ae61aefed057e50ab31c5de69173bd1ee4899d222a99d440f22b6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "46:5a:2e:8c:ff:ad",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-894575",
	                        "f5ebc1c53bc8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
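
The inspect output above shows the profile container still up, with the API-server port 8443/tcp published to 127.0.0.1:33451 alongside the SSH (22/tcp) and other kic ports. As a minimal sketch (not part of the test suite), the same host-port mapping can be read back out of `docker container inspect` JSON like this; the program, struct names, and the choice to decode only NetworkSettings.Ports are illustrative assumptions, and it presumes `docker` is on PATH and the old-k8s-version-894575 container still exists:

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// portBinding mirrors one entry of NetworkSettings.Ports["<port>/tcp"].
	type portBinding struct {
		HostIp   string `json:"HostIp"`
		HostPort string `json:"HostPort"`
	}

	// inspectEntry keeps only the part of the inspect JSON used here.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding `json:"Ports"`
		} `json:"NetworkSettings"`
	}

	func main() {
		// `docker container inspect` prints a JSON array, one entry per container.
		out, err := exec.Command("docker", "container", "inspect", "old-k8s-version-894575").Output()
		if err != nil {
			log.Fatal(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			log.Fatal(err)
		}
		if len(entries) == 0 {
			log.Fatal("no such container")
		}
		// For the dump above this prints 127.0.0.1:33451.
		for _, b := range entries[0].NetworkSettings.Ports["8443/tcp"] {
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}
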
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575: exit status 2 (387.307652ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
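
Note that the probe above prints "Running" for the Host field yet exits with status 2, which the harness records as "may be ok": the degraded state is carried in the exit code rather than in the printed field. A hedged sketch of reproducing that probe and reading both the output and the exit code is below; the binary path and profile name are taken from the command above, everything else is illustrative:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same probe as the helpers_test step above: print only the Host field
		// of `minikube status` for the old-k8s-version-894575 profile.
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "old-k8s-version-894575",
			"-n", "old-k8s-version-894575")
		out, err := cmd.Output() // stdout is still captured on a non-zero exit
		fmt.Println("host:", strings.TrimSpace(string(out)))

		// In the run above this was 2 even though the Host field printed "Running",
		// which the harness treats as "may be ok" before collecting post-mortem logs.
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit status:", exitErr.ExitCode())
		}
	}
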
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-894575 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-894575 logs -n 25: (1.276322749s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:58 UTC │
	│ start   │ -p NoKubernetes-327438 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ cert-options-997440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p cert-options-997440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p cert-options-997440                                                                                                                                                                                                                        │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p NoKubernetes-327438 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                               │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p old-k8s-version-894575 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                    │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                               │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:00:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:00:42.430475  631473 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:00:42.430717  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430725  631473 out.go:374] Setting ErrFile to fd 2...
	I1217 20:00:42.430734  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430932  631473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:00:42.431484  631473 out.go:368] Setting JSON to false
	I1217 20:00:42.432651  631473 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6193,"bootTime":1765995449,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:00:42.432716  631473 start.go:143] virtualization: kvm guest
	I1217 20:00:42.434554  631473 out.go:179] * [default-k8s-diff-port-759234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:00:42.436272  631473 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:00:42.436339  631473 notify.go:221] Checking for updates...
	I1217 20:00:42.438673  631473 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:00:42.439791  631473 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:00:42.444253  631473 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:00:42.445569  631473 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:00:42.446765  631473 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:00:42.448395  631473 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448504  631473 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448574  631473 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 20:00:42.448676  631473 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:00:42.473152  631473 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:00:42.473303  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.530715  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.520326347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.530839  631473 docker.go:319] overlay module found
	I1217 20:00:42.533607  631473 out.go:179] * Using the docker driver based on user configuration
	I1217 20:00:42.534900  631473 start.go:309] selected driver: docker
	I1217 20:00:42.534931  631473 start.go:927] validating driver "docker" against <nil>
	I1217 20:00:42.534945  631473 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:00:42.535594  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.593983  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.584279589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.594185  631473 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:00:42.594402  631473 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:00:42.596050  631473 out.go:179] * Using Docker driver with root privileges
	I1217 20:00:42.597217  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:42.597290  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:42.597303  631473 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:00:42.597383  631473 start.go:353] cluster config:
	{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:42.599022  631473 out.go:179] * Starting "default-k8s-diff-port-759234" primary control-plane node in "default-k8s-diff-port-759234" cluster
	I1217 20:00:42.600540  631473 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:00:42.601819  631473 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:00:42.603027  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:42.603089  631473 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:00:42.603104  631473 cache.go:65] Caching tarball of preloaded images
	I1217 20:00:42.603158  631473 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:00:42.603241  631473 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:00:42.603255  631473 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:00:42.603409  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:42.603441  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json: {Name:mka62982d045e5cb058ac77025f345457b6a6373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:42.624544  631473 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:00:42.624564  631473 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:00:42.624587  631473 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:00:42.624618  631473 start.go:360] acquireMachinesLock for default-k8s-diff-port-759234: {Name:mk173016aaa355dafae1bd5727aae1037817b426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:00:42.624714  631473 start.go:364] duration metric: took 77.83µs to acquireMachinesLock for "default-k8s-diff-port-759234"
	I1217 20:00:42.624738  631473 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:00:42.624812  631473 start.go:125] createHost starting for "" (driver="docker")
	W1217 20:00:39.572913  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.072117  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:44.072432  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.104752  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:44.105460  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:44.011034  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:44.011594  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:44.011658  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:44.011708  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:44.044351  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.044381  596882 cri.go:89] found id: ""
	I1217 20:00:44.044394  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:44.044463  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.049338  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:44.049428  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:44.080283  596882 cri.go:89] found id: ""
	I1217 20:00:44.080314  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.080326  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:44.080337  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:44.080404  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:44.113789  596882 cri.go:89] found id: ""
	I1217 20:00:44.113818  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.113829  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:44.113835  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:44.113889  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:44.146485  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.146516  596882 cri.go:89] found id: ""
	I1217 20:00:44.146529  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:44.146598  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.150860  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:44.150933  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:44.180612  596882 cri.go:89] found id: ""
	I1217 20:00:44.180648  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.180661  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:44.180669  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:44.180733  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:44.215315  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.215341  596882 cri.go:89] found id: ""
	I1217 20:00:44.215351  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:44.215410  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.219707  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:44.219792  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:44.250358  596882 cri.go:89] found id: ""
	I1217 20:00:44.250390  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.250402  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:44.250410  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:44.250480  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:44.279599  596882 cri.go:89] found id: ""
	I1217 20:00:44.279629  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.279639  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:44.279654  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:44.279673  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:44.366299  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:44.366333  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:44.383253  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:44.383288  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:44.442881  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:44.442906  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:44.442929  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.483060  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:44.483124  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.514331  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:44.514367  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.542722  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:44.542760  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:44.590351  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:44.590389  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.127294  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:47.127787  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:47.127853  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:47.127918  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:47.156370  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.156396  596882 cri.go:89] found id: ""
	I1217 20:00:47.156404  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:47.156460  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.160516  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:47.160594  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:47.195038  596882 cri.go:89] found id: ""
	I1217 20:00:47.195068  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.195137  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:47.195143  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:47.195196  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:47.226808  596882 cri.go:89] found id: ""
	I1217 20:00:47.226835  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.226845  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:47.226851  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:47.226903  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:42.626516  631473 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:00:42.626787  631473 start.go:159] libmachine.API.Create for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:42.626819  631473 client.go:173] LocalClient.Create starting
	I1217 20:00:42.626888  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:00:42.626923  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.626942  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.626999  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:00:42.627020  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.627031  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.627386  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:00:42.645356  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:00:42.645431  631473 network_create.go:284] running [docker network inspect default-k8s-diff-port-759234] to gather additional debugging logs...
	I1217 20:00:42.645452  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234
	W1217 20:00:42.662433  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 returned with exit code 1
	I1217 20:00:42.662463  631473 network_create.go:287] error running [docker network inspect default-k8s-diff-port-759234]: docker network inspect default-k8s-diff-port-759234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-759234 not found
	I1217 20:00:42.662486  631473 network_create.go:289] output of [docker network inspect default-k8s-diff-port-759234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-759234 not found
	
	** /stderr **
	I1217 20:00:42.662577  631473 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:42.680765  631473 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:00:42.681557  631473 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:00:42.682052  631473 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:00:42.682584  631473 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 20:00:42.683304  631473 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 20:00:42.684136  631473 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4b420}
	I1217 20:00:42.684173  631473 network_create.go:124] attempt to create docker network default-k8s-diff-port-759234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 20:00:42.684252  631473 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 default-k8s-diff-port-759234
	I1217 20:00:42.733976  631473 network_create.go:108] docker network default-k8s-diff-port-759234 192.168.94.0/24 created
	I1217 20:00:42.734006  631473 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-759234" container
	I1217 20:00:42.734062  631473 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:00:42.752583  631473 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-759234 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:00:42.773596  631473 oci.go:103] Successfully created a docker volume default-k8s-diff-port-759234
	I1217 20:00:42.773686  631473 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-759234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --entrypoint /usr/bin/test -v default-k8s-diff-port-759234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:00:43.205798  631473 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-759234
	I1217 20:00:43.205868  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:43.205880  631473 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:00:43.205970  631473 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:00:47.198577  631473 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.992562765s)
	I1217 20:00:47.198609  631473 kic.go:203] duration metric: took 3.992725296s to extract preloaded images to volume ...
	W1217 20:00:47.198694  631473 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:00:47.198723  631473 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:00:47.198767  631473 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:00:47.260923  631473 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-759234 --name default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --network default-k8s-diff-port-759234 --ip 192.168.94.2 --volume default-k8s-diff-port-759234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	W1217 20:00:46.572829  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:49.072264  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:46.605455  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:49.104308  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:47.261698  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.261722  596882 cri.go:89] found id: ""
	I1217 20:00:47.261733  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:47.261790  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.267357  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:47.267438  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:47.306726  596882 cri.go:89] found id: ""
	I1217 20:00:47.306759  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.306770  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:47.306778  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:47.306842  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:47.340875  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.340912  596882 cri.go:89] found id: ""
	I1217 20:00:47.340924  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:47.341135  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.345736  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:47.345806  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:47.376962  596882 cri.go:89] found id: ""
	I1217 20:00:47.377012  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.377025  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:47.377032  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:47.377124  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:47.407325  596882 cri.go:89] found id: ""
	I1217 20:00:47.407359  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.407374  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:47.407387  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:47.407408  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:47.473703  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:47.473725  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:47.473743  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.508764  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:47.508811  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.539065  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:47.539113  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.571543  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:47.571587  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:47.643416  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:47.643456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.689273  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:47.689316  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:47.823222  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:47.823260  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.347237  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:50.347659  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:50.347717  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:50.348197  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:50.391187  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.391339  596882 cri.go:89] found id: ""
	I1217 20:00:50.391419  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:50.391505  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.396902  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:50.397015  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:50.441286  596882 cri.go:89] found id: ""
	I1217 20:00:50.441360  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.441373  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:50.441389  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:50.441452  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:50.479045  596882 cri.go:89] found id: ""
	I1217 20:00:50.479088  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.479100  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:50.479108  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:50.479174  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:50.515926  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.516275  596882 cri.go:89] found id: ""
	I1217 20:00:50.516295  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:50.516365  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.522153  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:50.522238  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:50.562124  596882 cri.go:89] found id: ""
	I1217 20:00:50.562187  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.562199  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:50.562208  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:50.562277  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:50.601222  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:50.601377  596882 cri.go:89] found id: ""
	I1217 20:00:50.601396  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:50.601522  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.607093  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:50.607179  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:50.643677  596882 cri.go:89] found id: ""
	I1217 20:00:50.643709  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.643725  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:50.643734  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:50.643810  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:50.683346  596882 cri.go:89] found id: ""
	I1217 20:00:50.683378  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.683389  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:50.683402  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:50.683418  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:50.807284  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:50.807323  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.829965  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:50.830005  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:50.903560  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:50.903583  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:50.903608  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.952336  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:50.952375  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.986508  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:50.986545  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:51.022486  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:51.022517  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:51.088659  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:51.088715  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
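
The pass above is minikube's diagnostic sweep while the apiserver is unreachable: for each control-plane component it asks crictl for matching container IDs ("sudo crictl ps -a --quiet --name=<component>"), tails the last 400 lines of any container it finds, and falls back to journalctl for the kubelet and CRI-O units. A rough local sketch of that pattern in Go, using only the component names and tail depth visible in the log (the helper is illustrative, not minikube's actual logs.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers mirrors the "sudo crictl ps -a --quiet --name=<component>" calls above.
func listContainers(name string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, comp := range components {
		ids := listContainers(comp)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", comp)
			continue
		}
		for _, id := range ids {
			// Same 400-line tail the gathering commands above use.
			out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s]\n%s\n", comp, id, out)
		}
	}
}

In this run only kube-apiserver, kube-scheduler and kube-controller-manager have containers, which is why etcd, coredns, kube-proxy, kindnet and storage-provisioner all report "No container was found matching".
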
	I1217 20:00:47.583096  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Running}}
	I1217 20:00:47.608914  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.634283  631473 cli_runner.go:164] Run: docker exec default-k8s-diff-port-759234 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:00:47.694519  631473 oci.go:144] the created container "default-k8s-diff-port-759234" has a running status.
	I1217 20:00:47.694556  631473 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa...
	I1217 20:00:47.741322  631473 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:00:47.777682  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.801570  631473 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:00:47.801595  631473 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-759234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:00:47.858176  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.886441  631473 machine.go:94] provisionDockerMachine start ...
	I1217 20:00:47.886562  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:47.913250  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:47.913628  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:47.913655  631473 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:00:47.914572  631473 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49044->127.0.0.1:33453: read: connection reset by peer
	I1217 20:00:51.082474  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.082503  631473 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-759234"
	I1217 20:00:51.082569  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.109173  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.109464  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.109487  631473 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-759234 && echo "default-k8s-diff-port-759234" | sudo tee /etc/hostname
	I1217 20:00:51.282514  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.282597  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.302139  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.302370  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.302388  631473 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-759234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-759234/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-759234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:00:51.456372  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:00:51.456426  631473 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:00:51.456479  631473 ubuntu.go:190] setting up certificates
	I1217 20:00:51.456491  631473 provision.go:84] configureAuth start
	I1217 20:00:51.456563  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:51.480508  631473 provision.go:143] copyHostCerts
	I1217 20:00:51.480576  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:00:51.480592  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:00:51.480669  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:00:51.480772  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:00:51.480783  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:00:51.480822  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:00:51.480896  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:00:51.480906  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:00:51.480938  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:00:51.481006  631473 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-759234 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-759234 localhost minikube]
	I1217 20:00:51.633655  631473 provision.go:177] copyRemoteCerts
	I1217 20:00:51.633763  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:00:51.633814  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.658060  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:51.774263  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:00:51.836683  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 20:00:51.862224  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:00:51.890608  631473 provision.go:87] duration metric: took 434.096039ms to configureAuth
	I1217 20:00:51.890644  631473 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:00:51.890863  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:00:51.891022  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.916236  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.916552  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.916578  631473 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:00:52.350209  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:00:52.350238  631473 machine.go:97] duration metric: took 4.46376868s to provisionDockerMachine
	I1217 20:00:52.350253  631473 client.go:176] duration metric: took 9.723424305s to LocalClient.Create
	I1217 20:00:52.350277  631473 start.go:167] duration metric: took 9.72348972s to libmachine.API.Create "default-k8s-diff-port-759234"
	I1217 20:00:52.350294  631473 start.go:293] postStartSetup for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:52.350305  631473 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:00:52.350383  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:00:52.350429  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.369228  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.477868  631473 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:00:52.482314  631473 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:00:52.482357  631473 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:00:52.482372  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:00:52.482454  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:00:52.482534  631473 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:00:52.482625  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:00:52.491557  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:52.515015  631473 start.go:296] duration metric: took 164.702667ms for postStartSetup
	I1217 20:00:52.515418  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.535477  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:52.535813  631473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:00:52.535873  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.555517  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.657422  631473 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:00:52.662205  631473 start.go:128] duration metric: took 10.037371351s to createHost
	I1217 20:00:52.662241  631473 start.go:83] releasing machines lock for "default-k8s-diff-port-759234", held for 10.037515093s
	I1217 20:00:52.662322  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.680193  631473 ssh_runner.go:195] Run: cat /version.json
	I1217 20:00:52.680276  631473 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:00:52.680310  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.680347  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.701061  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.701301  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.851661  631473 ssh_runner.go:195] Run: systemctl --version
	I1217 20:00:52.858481  631473 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:00:52.893608  631473 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:00:52.898824  631473 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:00:52.898902  631473 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:00:52.924893  631473 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
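
The find/mv pair above parks any pre-existing bridge or podman CNI configs by renaming them to *.mk_disabled, so the kindnet CNI selected later owns the pod network; here 10-crio-bridge.conflist.disabled and 87-podman-bridge.conflist are the ones disabled. A rough Go equivalent of that step, assuming the same /etc/cni/net.d layout (illustrative only, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNI approximates the "sudo find /etc/cni/net.d ... -exec mv {} {}.mk_disabled" run above.
func disableBridgeCNI(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return moved, err
			}
			moved = append(moved, src)
		}
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNI("/etc/cni/net.d")
	fmt.Println("disabled:", moved, "err:", err)
}
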
	I1217 20:00:52.924917  631473 start.go:496] detecting cgroup driver to use...
	I1217 20:00:52.924946  631473 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:00:52.924995  631473 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:00:52.941996  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:00:52.954497  631473 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:00:52.954559  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:00:52.971423  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:00:52.990488  631473 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:00:53.079469  631473 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:00:53.166815  631473 docker.go:234] disabling docker service ...
	I1217 20:00:53.166878  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:00:53.186920  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:00:53.200855  631473 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:00:53.290366  631473 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:00:53.387334  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:00:53.400172  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:00:53.415056  631473 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:00:53.415136  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.425540  631473 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:00:53.425617  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.435225  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.444865  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.455024  631473 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:00:53.464046  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.473632  631473 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.488327  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.498230  631473 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:00:53.506887  631473 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:00:53.516474  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:53.601252  631473 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:00:54.068135  631473 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:00:54.068217  631473 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:00:54.073472  631473 start.go:564] Will wait 60s for crictl version
	I1217 20:00:54.073554  631473 ssh_runner.go:195] Run: which crictl
	I1217 20:00:54.078383  631473 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:00:54.106787  631473 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:00:54.106878  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.140042  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.172909  631473 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
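
The sed runs a few lines back are how minikube reshapes the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf before restarting the service: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to "systemd", conmon_cgroup is re-added as "pod", and net.ipv4.ip_unprivileged_port_start=0 is injected into default_sysctls. A minimal Go sketch of the first two in-place substitutions (path and values come from the log; the helper itself is illustrative):

package main

import (
	"os"
	"regexp"
)

// patchCrioConf performs the same kind of edit as the
// "sudo sed -i 's|^.*pause_image = .*$|pause_image = ...|'" commands above.
func patchCrioConf(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	// Values from the log; a real run would follow this with a daemon-reload
	// and "systemctl restart crio", as the log does.
	_ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.10.1", "systemd")
}
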
	W1217 20:00:51.073128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:53.572242  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:51.105457  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:53.606663  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:53.632189  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:53.632791  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:53.632867  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:53.632941  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:53.662308  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:53.662339  596882 cri.go:89] found id: ""
	I1217 20:00:53.662350  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:53.662420  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.666413  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:53.666495  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:53.695377  596882 cri.go:89] found id: ""
	I1217 20:00:53.695409  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.695421  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:53.695429  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:53.695516  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:53.724146  596882 cri.go:89] found id: ""
	I1217 20:00:53.724177  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.724187  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:53.724252  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:53.724349  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:53.752962  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:53.752990  596882 cri.go:89] found id: ""
	I1217 20:00:53.753000  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:53.753058  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.757461  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:53.757549  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:53.785748  596882 cri.go:89] found id: ""
	I1217 20:00:53.785774  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.785785  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:53.785792  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:53.785862  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:53.815860  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:53.815889  596882 cri.go:89] found id: ""
	I1217 20:00:53.815899  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:53.815952  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.820565  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:53.820632  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:53.847814  596882 cri.go:89] found id: ""
	I1217 20:00:53.847839  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.847850  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:53.847857  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:53.847920  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:53.876185  596882 cri.go:89] found id: ""
	I1217 20:00:53.876218  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.876230  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:53.876244  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:53.876259  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:53.971642  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:53.971693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:53.990638  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:53.990675  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:54.050668  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:54.050692  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:54.050707  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:54.084846  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:54.084893  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:54.115061  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:54.115108  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:54.146463  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:54.146491  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:54.199121  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:54.199159  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:56.736153  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:56.736638  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:56.736693  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:56.736746  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:56.765576  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:56.765600  596882 cri.go:89] found id: ""
	I1217 20:00:56.765610  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:56.765676  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.769942  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:56.770013  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:56.798112  596882 cri.go:89] found id: ""
	I1217 20:00:56.798145  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.798157  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:56.798165  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:56.798234  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:56.825167  596882 cri.go:89] found id: ""
	I1217 20:00:56.825200  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.825231  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:56.825247  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:56.825311  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:56.852568  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:56.852592  596882 cri.go:89] found id: ""
	I1217 20:00:56.852602  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:56.852661  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.856829  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:56.856902  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:56.883929  596882 cri.go:89] found id: ""
	I1217 20:00:56.883973  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.883986  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:56.883999  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:56.884062  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:56.911693  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:56.911714  596882 cri.go:89] found id: ""
	I1217 20:00:56.911722  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:56.911772  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.916212  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:56.916276  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:56.942585  596882 cri.go:89] found id: ""
	I1217 20:00:56.942617  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.942633  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:56.942642  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:56.942700  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:56.971939  596882 cri.go:89] found id: ""
	I1217 20:00:56.971976  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.971990  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:56.972004  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:56.972024  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:57.001777  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:57.001806  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:57.032936  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:57.032965  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:57.078327  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:57.078364  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:57.113176  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:57.113213  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:57.201920  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:57.201957  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:57.218426  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:57.218456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
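
Each retry pass above opens with a healthz probe against https://192.168.76.2:8443/healthz; while the apiserver refuses connections the check is logged as "stopped" and the component logs are re-gathered. A self-contained sketch of that probe loop (the URL comes from the log; the interval, timeout and function are illustrative, not the api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the healthz endpoint until it answers 200 OK or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The apiserver serves a self-signed cert during bring-up, so skip verification here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.76.2:8443/healthz", 1*time.Minute))
}
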
	I1217 20:00:54.174562  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:54.194566  631473 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 20:00:54.199116  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
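
The one-liner above makes the host.minikube.internal mapping idempotent: strip any existing line that ends in the name, append "192.168.94.1<tab>host.minikube.internal", and copy the temp file back over /etc/hosts (the same pattern is reused later for control-plane.minikube.internal). A plain-Go version of that edit might look like the following (illustrative only; it needs the privileges the sudo cp implies):

package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line for name and appends "addr<TAB>name",
// mirroring the grep -v / echo / sudo cp pipeline above.
func ensureHostsEntry(path, addr, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, addr+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	_ = ensureHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal")
}
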
	I1217 20:00:54.210935  631473 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:00:54.211103  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:54.211184  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.248494  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.248518  631473 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:00:54.248568  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.273697  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.273726  631473 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:00:54.273735  631473 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1217 20:00:54.273832  631473 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:00:54.273935  631473 ssh_runner.go:195] Run: crio config
	I1217 20:00:54.323646  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:54.323671  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:54.323691  631473 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:00:54.323723  631473 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759234 NodeName:default-k8s-diff-port-759234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:00:54.323843  631473 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:00:54.323910  631473 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:00:54.333287  631473 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:00:54.333359  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:00:54.341865  631473 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 20:00:54.355367  631473 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:00:54.370136  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
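
The kubeadm, kubelet and kube-proxy YAML above is rendered from the options printed at kubeadm.go:190 and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (2224 bytes here) before being copied into place during cluster start. A stripped-down sketch of that options-to-YAML templating, covering only a few ClusterConfiguration fields visible in the log (the struct and template are illustrative, not minikube's actual types):

package main

import (
	"os"
	"text/template"
)

// opts holds just the fields the fragment below needs; the real generator
// carries many more (see the kubeadm.go:190 line earlier in the log).
type opts struct {
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
	ControlPlaneAddress string
}

const clusterCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: mk
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	o := opts{
		APIServerPort:       8444,
		KubernetesVersion:   "v1.34.3",
		PodSubnet:           "10.244.0.0/16",
		ServiceCIDR:         "10.96.0.0/12",
		ControlPlaneAddress: "control-plane.minikube.internal",
	}
	_ = template.Must(template.New("cfg").Parse(clusterCfg)).Execute(os.Stdout, o)
}
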
	I1217 20:00:54.383695  631473 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:00:54.387416  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:00:54.397752  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:54.478375  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:00:54.502901  631473 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234 for IP: 192.168.94.2
	I1217 20:00:54.502928  631473 certs.go:195] generating shared ca certs ...
	I1217 20:00:54.502956  631473 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.503145  631473 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:00:54.503202  631473 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:00:54.503217  631473 certs.go:257] generating profile certs ...
	I1217 20:00:54.503295  631473 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key
	I1217 20:00:54.503322  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt with IP's: []
	I1217 20:00:54.617711  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt ...
	I1217 20:00:54.617747  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt: {Name:mk5d78d7f68addaf1f73847c6c02fd442f5e6ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.617930  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key ...
	I1217 20:00:54.617950  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key: {Name:mke8a415d0af374cf9fe8570e6fe4c7202332109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.618032  631473 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167
	I1217 20:00:54.618049  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 20:00:54.665685  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 ...
	I1217 20:00:54.665716  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167: {Name:mkfcccc5ab764237ebc01d7e772bd39ad2e57805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.665884  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 ...
	I1217 20:00:54.665904  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167: {Name:mk4c6de11c85c3fb77bd1f278ce0e0fd2b33aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.666008  631473 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt
	I1217 20:00:54.666104  631473 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key
	I1217 20:00:54.666162  631473 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key
	I1217 20:00:54.666178  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt with IP's: []
	I1217 20:00:54.735423  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt ...
	I1217 20:00:54.735452  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt: {Name:mk6946a87226d60c386ab3fc364ed99a58d10cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.735624  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key ...
	I1217 20:00:54.735638  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key: {Name:mk6cae84f91184f3a12c3274f32b7e32ae6eea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
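
The certs.go/crypto.go lines above generate the profile certificates: a client cert for minikube-user, an apiserver serving cert whose IP SANs are 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2, and a proxy-client (aggregator) cert, all signed by the shared minikubeCA reused from an earlier run. A compact crypto/x509 sketch of that signing step, using a throwaway CA so it runs standalone (subject names and key sizes are illustrative guesses; the 26280h lifetime mirrors the CertExpiration value in the cluster config above):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServingCert issues a leaf certificate with IP SANs, signed by the given CA.
func signServingCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		IPAddresses:  ips,
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}

func main() {
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	ca, _ := x509.ParseCertificate(caDER)
	ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
		net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2")}
	der, _, err := signServingCert(ca, caKey, ips)
	fmt.Printf("issued %d-byte DER cert with %d IP SANs (err=%v)\n", len(der), len(ips), err)
}
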
	I1217 20:00:54.735804  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:00:54.735844  631473 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:00:54.735855  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:00:54.735877  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:00:54.735901  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:00:54.735925  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:00:54.735974  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:54.736625  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:00:54.756198  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:00:54.773753  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:00:54.791250  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:00:54.809439  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:00:54.828101  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:00:54.847713  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:00:54.866560  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:00:54.885184  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:00:54.906455  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:00:54.924265  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:00:54.942817  631473 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:00:54.956309  631473 ssh_runner.go:195] Run: openssl version
	I1217 20:00:54.962641  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.971170  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:00:54.979233  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983177  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983245  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:00:55.018977  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.027253  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.035165  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.043017  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:00:55.051440  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055458  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055523  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.092379  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:00:55.101231  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:00:55.111064  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.119199  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:00:55.127063  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.130993  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.131062  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.165321  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:00:55.173294  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
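
The openssl x509 -hash / ln -fs pairs above install each CA-style PEM under /etc/ssl/certs by its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how the system trust store locates it. A Go sketch of the same shell-out-and-symlink pattern (file list taken from the log; the helper is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash mirrors the "openssl x509 -hash -noout -in <pem>" plus
// "ln -fs <pem> /etc/ssl/certs/<hash>.0" pairs above.
func linkByHash(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(pem, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/3757972.pem",
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/375797.pem",
	} {
		if err := linkByHash(pem); err != nil {
			fmt.Println("skip", pem, ":", err)
		}
	}
}
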
	I1217 20:00:55.181422  631473 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:00:55.185376  631473 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:00:55.185448  631473 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:55.185546  631473 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:00:55.185607  631473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:00:55.217477  631473 cri.go:89] found id: ""
	I1217 20:00:55.217551  631473 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:00:55.226933  631473 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:00:55.236854  631473 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:00:55.236934  631473 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:00:55.245579  631473 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:00:55.245602  631473 kubeadm.go:158] found existing configuration files:
	
	I1217 20:00:55.245652  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 20:00:55.253938  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:00:55.253998  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:00:55.261865  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 20:00:55.269887  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:00:55.269992  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:00:55.278000  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.286714  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:00:55.286788  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.296035  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 20:00:55.305037  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:00:55.305131  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:00:55.312998  631473 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:00:55.373971  631473 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:00:55.436480  631473 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
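	The two preflight warnings above are expected here: minikube already ignores SystemVerification for the docker driver (see the "ignoring SystemVerification" line earlier), and the kubelet warning carries its own fix. A sketch of applying that suggestion on the node:
	
	    # enable kubelet at boot, as the preflight warning recommends
	    sudo systemctl enable kubelet.service
	    # optionally confirm the unit is now enabled
	    systemctl is-enabled kubelet.service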
	W1217 20:00:56.071929  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:58.571128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:56.104574  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:58.604838  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:57.277327  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:57.277349  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:57.277366  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:59.811179  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1217 20:01:01.071960  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:03.571727  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:00.604975  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:02.605263  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:05.106561  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:01:06.067126  631473 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:01:06.067196  631473 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:01:06.067312  631473 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:01:06.067401  631473 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:01:06.067442  631473 kubeadm.go:319] OS: Linux
	I1217 20:01:06.067513  631473 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:01:06.067558  631473 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:01:06.067635  631473 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:01:06.067697  631473 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:01:06.067738  631473 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:01:06.067813  631473 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:01:06.067880  631473 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:01:06.067957  631473 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:01:06.068050  631473 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:01:06.068197  631473 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:01:06.068340  631473 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:01:06.068462  631473 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:01:06.070305  631473 out.go:252]   - Generating certificates and keys ...
	I1217 20:01:06.070395  631473 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:01:06.070458  631473 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:01:06.070524  631473 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:01:06.070580  631473 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:01:06.070634  631473 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:01:06.070675  631473 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:01:06.070722  631473 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:01:06.070887  631473 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.070954  631473 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:01:06.071106  631473 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.071215  631473 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:01:06.071290  631473 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:01:06.071343  631473 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:01:06.071423  631473 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:01:06.071499  631473 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:01:06.071573  631473 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:01:06.071647  631473 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:01:06.071757  631473 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:01:06.071841  631473 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:01:06.071959  631473 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:01:06.072065  631473 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:01:06.073367  631473 out.go:252]   - Booting up control plane ...
	I1217 20:01:06.073455  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:01:06.073530  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:01:06.073591  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:01:06.073692  631473 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:01:06.073789  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:01:06.073886  631473 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:01:06.073960  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:01:06.074002  631473 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:01:06.074140  631473 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:01:06.074228  631473 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:01:06.074276  631473 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001922128s
	I1217 20:01:06.074352  631473 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:01:06.074416  631473 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1217 20:01:06.074487  631473 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:01:06.074549  631473 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:01:06.074624  631473 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.929603333s
	I1217 20:01:06.074691  631473 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.11807832s
	I1217 20:01:06.074783  631473 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002138646s
	I1217 20:01:06.074883  631473 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:01:06.074999  631473 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:01:06.075046  631473 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:01:06.075233  631473 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-759234 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:01:06.075296  631473 kubeadm.go:319] [bootstrap-token] Using token: v6m366.ufgpfn05m87tgdpr
	I1217 20:01:06.076758  631473 out.go:252]   - Configuring RBAC rules ...
	I1217 20:01:06.076848  631473 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:01:06.076928  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:01:06.077189  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:01:06.077365  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:01:06.077488  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:01:06.077579  631473 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:01:06.077727  631473 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:01:06.077797  631473 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:01:06.077864  631473 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:01:06.077879  631473 kubeadm.go:319] 
	I1217 20:01:06.077952  631473 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:01:06.077959  631473 kubeadm.go:319] 
	I1217 20:01:06.078019  631473 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:01:06.078028  631473 kubeadm.go:319] 
	I1217 20:01:06.078048  631473 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:01:06.078140  631473 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:01:06.078221  631473 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:01:06.078230  631473 kubeadm.go:319] 
	I1217 20:01:06.078313  631473 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:01:06.078322  631473 kubeadm.go:319] 
	I1217 20:01:06.078396  631473 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:01:06.078404  631473 kubeadm.go:319] 
	I1217 20:01:06.078487  631473 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:01:06.078589  631473 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:01:06.078685  631473 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:01:06.078694  631473 kubeadm.go:319] 
	I1217 20:01:06.078778  631473 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:01:06.078851  631473 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:01:06.078857  631473 kubeadm.go:319] 
	I1217 20:01:06.078933  631473 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079036  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:01:06.079057  631473 kubeadm.go:319] 	--control-plane 
	I1217 20:01:06.079060  631473 kubeadm.go:319] 
	I1217 20:01:06.079150  631473 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:01:06.079160  631473 kubeadm.go:319] 
	I1217 20:01:06.079259  631473 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079417  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:01:06.079446  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:01:06.079457  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:06.081231  631473 out.go:179] * Configuring CNI (Container Networking Interface) ...
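	The join commands printed above pin the cluster CA with --discovery-token-ca-cert-hash sha256:… . If that hash ever needs to be recomputed by hand, the usual OpenSSL pipeline from the kubeadm documentation is sketched below; the CA path is an assumption (kubeadm's default is /etc/kubernetes/pki/ca.crt, while this run keeps certificates under /var/lib/minikube/certs), and it assumes an RSA CA key:
	
	    # recompute the discovery token CA cert hash (path and RSA key are assumptions)
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 | sed 's/^.* //'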
	I1217 20:01:04.812163  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 20:01:04.812235  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:04.812292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:04.844291  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:04.844315  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:04.844319  596882 cri.go:89] found id: ""
	I1217 20:01:04.844328  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:04.844385  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.848366  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.852177  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:04.852256  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:04.883987  596882 cri.go:89] found id: ""
	I1217 20:01:04.884024  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.884038  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:04.884051  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:04.884140  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:04.914990  596882 cri.go:89] found id: ""
	I1217 20:01:04.915020  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.915031  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:04.915040  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:04.915135  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:04.944932  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:04.944965  596882 cri.go:89] found id: ""
	I1217 20:01:04.944978  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:04.945047  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.949407  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:04.949476  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:04.980714  596882 cri.go:89] found id: ""
	I1217 20:01:04.980744  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.980756  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:04.980765  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:04.980827  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:05.014278  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:05.014303  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:05.014306  596882 cri.go:89] found id: ""
	I1217 20:01:05.014315  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:05.014369  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.019212  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.023605  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:05.023688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:05.054178  596882 cri.go:89] found id: ""
	I1217 20:01:05.054210  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.054220  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:05.054226  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:05.054297  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:05.089365  596882 cri.go:89] found id: ""
	I1217 20:01:05.089398  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.089410  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:05.089432  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:05.089451  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:05.129946  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:05.129977  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:05.229093  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:05.229136  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:01:06.082676  631473 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:01:06.087568  631473 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:01:06.087588  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:01:06.101995  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
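	The CNI step above amounts to a kubectl apply of the generated /var/tmp/minikube/cni.yaml (kindnet, which is what minikube recommends for this docker driver plus crio runtime combination). A quick follow-up check, sketched with the same binary and kubeconfig paths the log uses, is to list the kube-system daemonsets and confirm the kindnet one exists:
	
	    # the kindnet CNI runs as a daemonset in kube-system
	    sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n kube-system get daemonsets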
	I1217 20:01:06.315905  631473 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-759234 minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=default-k8s-diff-port-759234 minikube.k8s.io/primary=true
	I1217 20:01:06.327829  631473 ops.go:34] apiserver oom_adj: -16
	I1217 20:01:06.396458  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.897042  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.396599  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.604674  624471 pod_ready.go:94] pod "coredns-7d764666f9-988jw" is "Ready"
	I1217 20:01:07.604701  624471 pod_ready.go:86] duration metric: took 37.00583192s for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.607174  624471 pod_ready.go:83] waiting for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.611282  624471 pod_ready.go:94] pod "etcd-no-preload-832842" is "Ready"
	I1217 20:01:07.611311  624471 pod_ready.go:86] duration metric: took 4.112039ms for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.613297  624471 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.617064  624471 pod_ready.go:94] pod "kube-apiserver-no-preload-832842" is "Ready"
	I1217 20:01:07.617117  624471 pod_ready.go:86] duration metric: took 3.797766ms for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.619212  624471 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.803328  624471 pod_ready.go:94] pod "kube-controller-manager-no-preload-832842" is "Ready"
	I1217 20:01:07.803357  624471 pod_ready.go:86] duration metric: took 184.117172ms for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.003550  624471 pod_ready.go:83] waiting for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.403261  624471 pod_ready.go:94] pod "kube-proxy-jc5dd" is "Ready"
	I1217 20:01:08.403288  624471 pod_ready.go:86] duration metric: took 399.709625ms for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.603502  624471 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002875  624471 pod_ready.go:94] pod "kube-scheduler-no-preload-832842" is "Ready"
	I1217 20:01:09.002905  624471 pod_ready.go:86] duration metric: took 399.378114ms for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002919  624471 pod_ready.go:40] duration metric: took 38.408153316s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:09.051128  624471 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:01:09.053534  624471 out.go:179] * Done! kubectl is now configured to use "no-preload-832842" cluster and "default" namespace by default
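	After the "Done!" line the profile's context is already selected in the kubeconfig this run writes; switching between the clusters started by these tests is ordinary kubectl context handling, for example:
	
	    # minikube names the context after the profile
	    kubectl config use-context no-preload-832842
	    kubectl get pods -A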
	W1217 20:01:06.072320  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:08.571546  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	I1217 20:01:07.897116  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.397124  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.897399  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.397296  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.897202  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.397310  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.897175  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.975504  631473 kubeadm.go:1114] duration metric: took 4.659591269s to wait for elevateKubeSystemPrivileges
	I1217 20:01:10.975540  631473 kubeadm.go:403] duration metric: took 15.790098497s to StartCluster
	I1217 20:01:10.975558  631473 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.975646  631473 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:10.977547  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.977796  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:01:10.977817  631473 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:10.977867  631473 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:01:10.978006  631473 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978029  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:10.978054  631473 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978101  631473 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-759234"
	I1217 20:01:10.978031  631473 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-759234"
	I1217 20:01:10.978248  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:10.978539  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.978747  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.979515  631473 out.go:179] * Verifying Kubernetes components...
	I1217 20:01:10.980948  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:11.004351  631473 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:01:09.570523  625400 pod_ready.go:94] pod "coredns-5dd5756b68-gbhs5" is "Ready"
	I1217 20:01:09.570551  625400 pod_ready.go:86] duration metric: took 34.005219617s for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.573051  625400 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.576701  625400 pod_ready.go:94] pod "etcd-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.576725  625400 pod_ready.go:86] duration metric: took 3.651465ms for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.579243  625400 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.583452  625400 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.583478  625400 pod_ready.go:86] duration metric: took 4.213779ms for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.585997  625400 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.768942  625400 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.768977  625400 pod_ready.go:86] duration metric: took 182.957254ms for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.970200  625400 pod_ready.go:83] waiting for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.368408  625400 pod_ready.go:94] pod "kube-proxy-bdzb6" is "Ready"
	I1217 20:01:10.368435  625400 pod_ready.go:86] duration metric: took 398.20631ms for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.569794  625400 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969210  625400 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-894575" is "Ready"
	I1217 20:01:10.969252  625400 pod_ready.go:86] duration metric: took 399.426249ms for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969270  625400 pod_ready.go:40] duration metric: took 35.409804659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:11.041190  625400 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1217 20:01:11.044208  625400 out.go:203] 
	W1217 20:01:11.045630  625400 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 20:01:11.047652  625400 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 20:01:11.049163  625400 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-894575" cluster and "default" namespace by default
	I1217 20:01:11.005141  631473 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-759234"
	I1217 20:01:11.005190  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:11.005673  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:11.005685  631473 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.005702  631473 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:01:11.005753  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.034589  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.037037  631473 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.037065  631473 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:01:11.037212  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.065091  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.078156  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:01:11.158438  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:11.173742  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.214719  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.376291  631473 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 20:01:11.376906  631473 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-759234" to be "Ready" ...
	I1217 20:01:11.616252  631473 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:01:11.617452  631473 addons.go:530] duration metric: took 639.583404ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:01:11.880698  631473 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-759234" context rescaled to 1 replicas
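	At this point the addon manager has enabled storage-provisioner and default-storageclass and rescaled coredns to a single replica. The enabled set for a profile can be confirmed with the addons list subcommand, for example:
	
	    # show addon status for this profile
	    minikube -p default-k8s-diff-port-759234 addons list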
	I1217 20:01:15.295985  596882 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066827019s)
	W1217 20:01:15.296022  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 20:01:15.296032  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:15.296044  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:15.329910  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:15.329943  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:15.361430  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:15.361465  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:15.379135  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:15.379176  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:15.413631  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:15.413671  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:15.444072  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:15.444120  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:15.474296  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:15.474325  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:13.379733  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:15.380677  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:17.382167  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	I1217 20:01:18.028829  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:19.268145  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:48746->192.168.76.2:8443: read: connection reset by peer
	I1217 20:01:19.268222  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:19.268292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:19.297951  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.297972  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:19.297976  596882 cri.go:89] found id: ""
	I1217 20:01:19.297984  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:19.298048  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.302214  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.305947  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:19.306014  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:19.333763  596882 cri.go:89] found id: ""
	I1217 20:01:19.333789  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.333798  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:19.333804  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:19.333864  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:19.362644  596882 cri.go:89] found id: ""
	I1217 20:01:19.362672  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.362682  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:19.362687  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:19.362752  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:19.394030  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.394059  596882 cri.go:89] found id: ""
	I1217 20:01:19.394071  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:19.394157  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.398506  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:19.398583  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:19.425535  596882 cri.go:89] found id: ""
	I1217 20:01:19.425560  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.425569  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:19.425575  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:19.425638  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:19.454704  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.454726  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.454731  596882 cri.go:89] found id: ""
	I1217 20:01:19.454743  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:19.454811  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.459054  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.463029  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:19.463111  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:19.491583  596882 cri.go:89] found id: ""
	I1217 20:01:19.491610  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.491622  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:19.491631  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:19.491688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:19.520292  596882 cri.go:89] found id: ""
	I1217 20:01:19.520328  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.520341  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:19.520364  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:19.520390  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:19.604632  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:19.604674  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:19.621452  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:19.621486  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:19.680554  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:19.680581  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:19.680597  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.712658  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:19.712693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.740964  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:19.740997  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:19.773014  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:19.773045  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	W1217 20:01:19.802765  596882 logs.go:130] failed kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	
	** /stderr **
	I1217 20:01:19.802797  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:19.802814  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.830245  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:19.830272  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.857816  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:19.857846  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
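	The log-gathering loop above shells out to crictl for container listings and per-container logs, and to journalctl for the kubelet and CRI-O units. The same inspection can be repeated by hand on the node; <container-id> below is a placeholder:
	
	    # list all CRI containers, running and exited
	    sudo crictl ps -a
	    # tail the last 400 lines of one container's log
	    sudo crictl logs --tail 400 <container-id>
	    # CRI-O's own unit log
	    sudo journalctl -u crio -n 400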
	W1217 20:01:19.879976  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:21.880734  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Dec 17 20:00:53 old-k8s-version-894575 crio[569]: time="2025-12-17T20:00:53.330714955Z" level=info msg="Started container" PID=1740 containerID=66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper id=0c0265a2-b75a-40ed-ac4c-eb8098f422d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=40425cb0fde9ee8d85f21bb137c410e64765d8ece68848c3e3ed94ea57e56ba9
	Dec 17 20:00:54 old-k8s-version-894575 crio[569]: time="2025-12-17T20:00:54.285423666Z" level=info msg="Removing container: c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036" id=3fe17c36-dea4-4ba1-b602-6be600c26069 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:00:54 old-k8s-version-894575 crio[569]: time="2025-12-17T20:00:54.295531789Z" level=info msg="Removed container c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=3fe17c36-dea4-4ba1-b602-6be600c26069 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.313914963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=92ea9391-8b27-4e33-9581-34fd903fe249 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.316020157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ca8fc3e9-b245-4c74-9f76-b633d8e962a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.317581554Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f7874b29-0a0c-4331-bedb-30cb4e1a1749 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.317735426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.322597201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.322782153Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/779521eae0643fb583d61062bdbaa1ac73ade81b2c991635f89de4b746dd3145/merged/etc/passwd: no such file or directory"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.322809813Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/779521eae0643fb583d61062bdbaa1ac73ade81b2c991635f89de4b746dd3145/merged/etc/group: no such file or directory"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.323138775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.366308643Z" level=info msg="Created container 464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da: kube-system/storage-provisioner/storage-provisioner" id=f7874b29-0a0c-4331-bedb-30cb4e1a1749 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.366939188Z" level=info msg="Starting container: 464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da" id=98f96614-c7b0-4501-9396-270f3c640c30 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.368711176Z" level=info msg="Started container" PID=1754 containerID=464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da description=kube-system/storage-provisioner/storage-provisioner id=98f96614-c7b0-4501-9396-270f3c640c30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2edc8c4cafe694fa961529bb9164cf0914e5280093ae57b33a8d2a47c8edb95a
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.189395255Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cd75b443-5122-449c-88d3-6d4839eb17af name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.192794917Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d1424d61-faf8-4319-8deb-f80a09cb877b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.194456882Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=14dfc5c7-0541-4d01-8c00-7e0e4aa67951 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.194611308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.205445404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.206212472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.250993166Z" level=info msg="Created container 294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=14dfc5c7-0541-4d01-8c00-7e0e4aa67951 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.252981831Z" level=info msg="Starting container: 294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff" id=11cdb0b3-f2db-4b24-9941-0169dc938f3a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.255288122Z" level=info msg="Started container" PID=1773 containerID=294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper id=11cdb0b3-f2db-4b24-9941-0169dc938f3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=40425cb0fde9ee8d85f21bb137c410e64765d8ece68848c3e3ed94ea57e56ba9
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.336176576Z" level=info msg="Removing container: 66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387" id=c466adae-b5b4-4518-9c57-dae981e76481 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.349011952Z" level=info msg="Removed container 66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=c466adae-b5b4-4518-9c57-dae981e76481 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	294d1768cc937       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago      Exited              dashboard-metrics-scraper   2                   40425cb0fde9e       dashboard-metrics-scraper-5f989dc9cf-5hjsp       kubernetes-dashboard
	464015c6e9608       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   2edc8c4cafe69       storage-provisioner                              kube-system
	75a986f0ae8c3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   e02d0f1da3335       kubernetes-dashboard-8694d4445c-jb6px            kubernetes-dashboard
	ab6e1c127ed17       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   a450ad554f409       coredns-5dd5756b68-gbhs5                         kube-system
	241b33f7c414a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   1e67ad478e572       busybox                                          default
	780e65a762a10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   2edc8c4cafe69       storage-provisioner                              kube-system
	71ddc80929603       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   d89f825e36d9e       kube-proxy-bdzb6                                 kube-system
	3f0565e2bdcd7       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   75b79f69f05ae       kindnet-p8d9f                                    kube-system
	484a1e94925a1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           55 seconds ago      Running             kube-apiserver              0                   76d00afd11470       kube-apiserver-old-k8s-version-894575            kube-system
	71cce81b2a47a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           55 seconds ago      Running             etcd                        0                   79ef6facea046       etcd-old-k8s-version-894575                      kube-system
	467ab50d14f76       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           55 seconds ago      Running             kube-controller-manager     0                   25967367ece24       kube-controller-manager-old-k8s-version-894575   kube-system
	80c6fccb8bdf5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           55 seconds ago      Running             kube-scheduler              0                   76c7f66b3532d       kube-scheduler-old-k8s-version-894575            kube-system
	
	
	==> coredns [ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44913 - 16568 "HINFO IN 4519636891788960163.4803829897480741640. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015751219s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-894575
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-894575
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=old-k8s-version-894575
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_59_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:59:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-894575
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-894575
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                f9507002-721b-4e21-9c9c-8a3faf234561
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-gbhs5                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-894575                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-p8d9f                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-old-k8s-version-894575             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-894575    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-bdzb6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-old-k8s-version-894575             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-5hjsp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jb6px             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m5s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m5s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m5s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node old-k8s-version-894575 event: Registered Node old-k8s-version-894575 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-894575 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                  node-controller  Node old-k8s-version-894575 event: Registered Node old-k8s-version-894575 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [71cce81b2a47a327a9532ef2473382c328c9042db27d9361ba053cc1855855f4] <==
	{"level":"info","ts":"2025-12-17T20:00:46.758421Z","caller":"traceutil/trace.go:171","msg":"trace[250100453] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"186.516699ms","start":"2025-12-17T20:00:46.571889Z","end":"2025-12-17T20:00:46.758406Z","steps":["trace[250100453] 'process raft request'  (duration: 186.411335ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758492Z","caller":"traceutil/trace.go:171","msg":"trace[341412219] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"186.678933ms","start":"2025-12-17T20:00:46.571801Z","end":"2025-12-17T20:00:46.75848Z","steps":["trace[341412219] 'process raft request'  (duration: 186.333507ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758538Z","caller":"traceutil/trace.go:171","msg":"trace[1109591681] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"186.0966ms","start":"2025-12-17T20:00:46.572427Z","end":"2025-12-17T20:00:46.758524Z","steps":["trace[1109591681] 'process raft request'  (duration: 185.962038ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758567Z","caller":"traceutil/trace.go:171","msg":"trace[400570659] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"185.731669ms","start":"2025-12-17T20:00:46.572828Z","end":"2025-12-17T20:00:46.75856Z","steps":["trace[400570659] 'process raft request'  (duration: 185.68739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758677Z","caller":"traceutil/trace.go:171","msg":"trace[1954442103] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"186.120677ms","start":"2025-12-17T20:00:46.572545Z","end":"2025-12-17T20:00:46.758666Z","steps":["trace[1954442103] 'process raft request'  (duration: 185.884142ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758686Z","caller":"traceutil/trace.go:171","msg":"trace[1708764686] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"186.724775ms","start":"2025-12-17T20:00:46.571935Z","end":"2025-12-17T20:00:46.75866Z","steps":["trace[1708764686] 'process raft request'  (duration: 186.426667ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758701Z","caller":"traceutil/trace.go:171","msg":"trace[1773903274] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"186.07421ms","start":"2025-12-17T20:00:46.572617Z","end":"2025-12-17T20:00:46.758691Z","steps":["trace[1773903274] 'process raft request'  (duration: 185.850934ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.879784Z","caller":"traceutil/trace.go:171","msg":"trace[941785200] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"116.010506ms","start":"2025-12-17T20:00:46.76376Z","end":"2025-12-17T20:00:46.879771Z","steps":["trace[941785200] 'read index received'  (duration: 113.834413ms)","trace[941785200] 'applied index is now lower than readState.Index'  (duration: 2.175477ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:00:46.879807Z","caller":"traceutil/trace.go:171","msg":"trace[394468456] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"117.040842ms","start":"2025-12-17T20:00:46.762749Z","end":"2025-12-17T20:00:46.879789Z","steps":["trace[394468456] 'process raft request'  (duration: 114.903255ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:46.879897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.143024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:00:46.879918Z","caller":"traceutil/trace.go:171","msg":"trace[41405551] range","detail":"{range_begin:/registry/limitranges/kubernetes-dashboard/; range_end:/registry/limitranges/kubernetes-dashboard0; response_count:0; response_revision:572; }","duration":"116.180999ms","start":"2025-12-17T20:00:46.76373Z","end":"2025-12-17T20:00:46.879911Z","steps":["trace[41405551] 'agreement among raft nodes before linearized reading'  (duration: 116.107841ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.884773Z","caller":"traceutil/trace.go:171","msg":"trace[2090708288] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"120.846518ms","start":"2025-12-17T20:00:46.763911Z","end":"2025-12-17T20:00:46.884757Z","steps":["trace[2090708288] 'process raft request'  (duration: 120.672073ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.884904Z","caller":"traceutil/trace.go:171","msg":"trace[1689218853] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"117.886828ms","start":"2025-12-17T20:00:46.767002Z","end":"2025-12-17T20:00:46.884889Z","steps":["trace[1689218853] 'process raft request'  (duration: 117.716391ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.884941Z","caller":"traceutil/trace.go:171","msg":"trace[1989545465] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"120.970356ms","start":"2025-12-17T20:00:46.763953Z","end":"2025-12-17T20:00:46.884924Z","steps":["trace[1989545465] 'process raft request'  (duration: 120.733164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:47.160917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.855867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597792003784280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf.1882192424dfb849\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf.1882192424dfb849\" value_size:694 lease:499225755149008275 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:00:47.161037Z","caller":"traceutil/trace.go:171","msg":"trace[321171456] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"203.081194ms","start":"2025-12-17T20:00:46.957922Z","end":"2025-12-17T20:00:47.161003Z","steps":["trace[321171456] 'process raft request'  (duration: 28.082186ms)","trace[321171456] 'compare'  (duration: 174.757507ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:00:47.161116Z","caller":"traceutil/trace.go:171","msg":"trace[1884373908] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:610; }","duration":"198.514965ms","start":"2025-12-17T20:00:46.962582Z","end":"2025-12-17T20:00:47.161097Z","steps":["trace[1884373908] 'read index received'  (duration: 23.432436ms)","trace[1884373908] 'applied index is now lower than readState.Index'  (duration: 175.079873ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:00:47.161129Z","caller":"traceutil/trace.go:171","msg":"trace[1329714290] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"198.090127ms","start":"2025-12-17T20:00:46.963026Z","end":"2025-12-17T20:00:47.161116Z","steps":["trace[1329714290] 'process raft request'  (duration: 197.977448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:47.161204Z","caller":"traceutil/trace.go:171","msg":"trace[529436355] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"195.286099ms","start":"2025-12-17T20:00:46.96591Z","end":"2025-12-17T20:00:47.161196Z","steps":["trace[529436355] 'process raft request'  (duration: 195.252827ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:47.161285Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.71227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px\" ","response":"range_response_count:1 size:2849"}
	{"level":"info","ts":"2025-12-17T20:00:47.161323Z","caller":"traceutil/trace.go:171","msg":"trace[1907353273] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px; range_end:; response_count:1; response_revision:590; }","duration":"198.761267ms","start":"2025-12-17T20:00:46.962552Z","end":"2025-12-17T20:00:47.161314Z","steps":["trace[1907353273] 'agreement among raft nodes before linearized reading'  (duration: 198.635483ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:47.161326Z","caller":"traceutil/trace.go:171","msg":"trace[2056318030] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"197.910452ms","start":"2025-12-17T20:00:46.963407Z","end":"2025-12-17T20:00:47.161318Z","steps":["trace[2056318030] 'process raft request'  (duration: 197.651321ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:47.161362Z","caller":"traceutil/trace.go:171","msg":"trace[934537175] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"195.436062ms","start":"2025-12-17T20:00:46.965913Z","end":"2025-12-17T20:00:47.161349Z","steps":["trace[934537175] 'process raft request'  (duration: 195.199309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:47.161511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.772273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:4894"}
	{"level":"info","ts":"2025-12-17T20:00:47.161536Z","caller":"traceutil/trace.go:171","msg":"trace[742560136] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:590; }","duration":"196.805088ms","start":"2025-12-17T20:00:46.964724Z","end":"2025-12-17T20:00:47.161529Z","steps":["trace[742560136] 'agreement among raft nodes before linearized reading'  (duration: 196.733641ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:01:26 up  1:43,  0 user,  load average: 3.48, 3.24, 2.33
	Linux old-k8s-version-894575 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f0565e2bdcd725f2a285b6794d9cb087b195ddb248255a1410193df892996c7] <==
	I1217 20:00:34.880801       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:00:34.881200       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 20:00:34.881439       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:00:34.881499       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:00:34.881545       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:00:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:00:35.114742       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:00:35.176135       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:00:35.176226       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:00:35.176406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:00:35.577066       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:00:35.577191       1 metrics.go:72] Registering metrics
	I1217 20:00:35.577322       1 controller.go:711] "Syncing nftables rules"
	I1217 20:00:45.088169       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:00:45.088214       1 main.go:301] handling current node
	I1217 20:00:55.088269       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:00:55.088307       1 main.go:301] handling current node
	I1217 20:01:05.088305       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:01:05.088394       1 main.go:301] handling current node
	I1217 20:01:15.091232       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:01:15.091275       1 main.go:301] handling current node
	I1217 20:01:25.087954       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:01:25.088015       1 main.go:301] handling current node
	
	
	==> kube-apiserver [484a1e94925a1a7ea27bb0e8881ce92d0ba724ee5dc0be0b55aa22d4968fb0f9] <==
	I1217 20:00:33.901263       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1217 20:00:33.981585       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 20:00:33.981652       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 20:00:33.985051       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 20:00:33.985221       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 20:00:33.994290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 20:00:33.995480       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 20:00:33.995695       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 20:00:34.001215       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 20:00:34.001298       1 aggregator.go:166] initial CRD sync complete...
	I1217 20:00:34.001309       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 20:00:34.001316       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:00:34.001324       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:00:34.028411       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:00:34.903148       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:00:35.345509       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 20:00:35.394990       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 20:00:35.417435       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:00:35.425215       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:00:35.438802       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 20:00:35.488480       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.70.150"}
	I1217 20:00:35.502940       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.243.247"}
	I1217 20:00:46.570322       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 20:00:46.571402       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:00:46.572194       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [467ab50d14f76d9794b7546e57cbb0eec5d9291e092f5be7dae85296a7ea1b59] <==
	I1217 20:00:46.760679       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1217 20:00:46.760705       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1217 20:00:46.761337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.022µs"
	I1217 20:00:46.808181       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 20:00:46.808212       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 20:00:46.817357       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 20:00:46.911036       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jb6px"
	I1217 20:00:46.911069       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-5hjsp"
	I1217 20:00:46.961997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="200.746754ms"
	I1217 20:00:46.962451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="201.33235ms"
	I1217 20:00:47.163370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="200.876149ms"
	I1217 20:00:47.163510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="201.467112ms"
	I1217 20:00:47.163559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.338µs"
	I1217 20:00:47.163580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.035µs"
	I1217 20:00:47.175350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.866µs"
	I1217 20:00:47.188128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.901µs"
	I1217 20:00:50.292584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.817926ms"
	I1217 20:00:50.292710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.031µs"
	I1217 20:00:53.292745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.55µs"
	I1217 20:00:54.295770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.065µs"
	I1217 20:00:55.300142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="176.961µs"
	I1217 20:01:09.299361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.705559ms"
	I1217 20:01:09.299482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.993µs"
	I1217 20:01:11.353450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.738µs"
	I1217 20:01:17.276777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.503µs"
	
	
	==> kube-proxy [71ddc80929603be65503dc71e856358367024bf67d78ffb6c1371882b159eff9] <==
	I1217 20:00:34.730661       1 server_others.go:69] "Using iptables proxy"
	I1217 20:00:34.747455       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1217 20:00:34.807177       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:00:34.810928       1 server_others.go:152] "Using iptables Proxier"
	I1217 20:00:34.810968       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 20:00:34.810976       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 20:00:34.811009       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 20:00:34.817360       1 server.go:846] "Version info" version="v1.28.0"
	I1217 20:00:34.817390       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:34.820266       1 config.go:315] "Starting node config controller"
	I1217 20:00:34.820293       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 20:00:34.820740       1 config.go:188] "Starting service config controller"
	I1217 20:00:34.820762       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 20:00:34.820787       1 config.go:97] "Starting endpoint slice config controller"
	I1217 20:00:34.820801       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 20:00:34.920643       1 shared_informer.go:318] Caches are synced for node config
	I1217 20:00:34.920939       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 20:00:34.921028       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [80c6fccb8bdf5504ced354de5e08d38c6385613976d63820be5bf2822f675a3d] <==
	I1217 20:00:32.226283       1 serving.go:348] Generated self-signed cert in-memory
	W1217 20:00:33.971389       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:00:33.971446       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1217 20:00:33.971480       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:00:33.971495       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:00:34.004448       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 20:00:34.004530       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:34.007123       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:00:34.007223       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 20:00:34.008209       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 20:00:34.008290       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 20:00:34.107424       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.128824     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc72ff5c-fb85-4431-a4f5-88e4e1f04888-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jb6px\" (UID: \"fc72ff5c-fb85-4431-a4f5-88e4e1f04888\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px"
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.128888     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7179f0cb-5d60-4b81-b4cb-c5e37566bc08-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-5hjsp\" (UID: \"7179f0cb-5d60-4b81-b4cb-c5e37566bc08\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp"
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.128970     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24xn\" (UniqueName: \"kubernetes.io/projected/fc72ff5c-fb85-4431-a4f5-88e4e1f04888-kube-api-access-m24xn\") pod \"kubernetes-dashboard-8694d4445c-jb6px\" (UID: \"fc72ff5c-fb85-4431-a4f5-88e4e1f04888\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px"
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.129006     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkfb\" (UniqueName: \"kubernetes.io/projected/7179f0cb-5d60-4b81-b4cb-c5e37566bc08-kube-api-access-zfkfb\") pod \"dashboard-metrics-scraper-5f989dc9cf-5hjsp\" (UID: \"7179f0cb-5d60-4b81-b4cb-c5e37566bc08\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp"
	Dec 17 20:00:50 old-k8s-version-894575 kubelet[734]: I1217 20:00:50.283963     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px" podStartSLOduration=1.437734223 podCreationTimestamp="2025-12-17 20:00:46 +0000 UTC" firstStartedPulling="2025-12-17 20:00:47.28960755 +0000 UTC m=+16.215365773" lastFinishedPulling="2025-12-17 20:00:50.135754728 +0000 UTC m=+19.061512958" observedRunningTime="2025-12-17 20:00:50.283564089 +0000 UTC m=+19.209322322" watchObservedRunningTime="2025-12-17 20:00:50.283881408 +0000 UTC m=+19.209639641"
	Dec 17 20:00:53 old-k8s-version-894575 kubelet[734]: I1217 20:00:53.279180     734 scope.go:117] "RemoveContainer" containerID="c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036"
	Dec 17 20:00:54 old-k8s-version-894575 kubelet[734]: I1217 20:00:54.283949     734 scope.go:117] "RemoveContainer" containerID="c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036"
	Dec 17 20:00:54 old-k8s-version-894575 kubelet[734]: I1217 20:00:54.284172     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:00:54 old-k8s-version-894575 kubelet[734]: E1217 20:00:54.284510     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:00:55 old-k8s-version-894575 kubelet[734]: I1217 20:00:55.288412     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:00:55 old-k8s-version-894575 kubelet[734]: E1217 20:00:55.288778     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:00:57 old-k8s-version-894575 kubelet[734]: I1217 20:00:57.267231     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:00:57 old-k8s-version-894575 kubelet[734]: E1217 20:00:57.267519     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:01:05 old-k8s-version-894575 kubelet[734]: I1217 20:01:05.313323     734 scope.go:117] "RemoveContainer" containerID="780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: I1217 20:01:11.188425     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: I1217 20:01:11.334368     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: I1217 20:01:11.334700     734 scope.go:117] "RemoveContainer" containerID="294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: E1217 20:01:11.335045     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:01:17 old-k8s-version-894575 kubelet[734]: I1217 20:01:17.266601     734 scope.go:117] "RemoveContainer" containerID="294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	Dec 17 20:01:17 old-k8s-version-894575 kubelet[734]: E1217 20:01:17.266890     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:01:23 old-k8s-version-894575 kubelet[734]: I1217 20:01:23.384547     734 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: kubelet.service: Consumed 1.594s CPU time.
	
	
	==> kubernetes-dashboard [75a986f0ae8c399acd6a7e6fb4b4edd21dd8ecafde18a0e3734080cd5e518d63] <==
	2025/12/17 20:00:50 Using namespace: kubernetes-dashboard
	2025/12/17 20:00:50 Using in-cluster config to connect to apiserver
	2025/12/17 20:00:50 Using secret token for csrf signing
	2025/12/17 20:00:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:00:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:00:50 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 20:00:50 Generating JWE encryption key
	2025/12/17 20:00:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:00:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:00:50 Initializing JWE encryption key from synchronized object
	2025/12/17 20:00:50 Creating in-cluster Sidecar client
	2025/12/17 20:00:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:00:50 Serving insecurely on HTTP port: 9090
	2025/12/17 20:01:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:00:50 Starting overwatch
	
	
	==> storage-provisioner [464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da] <==
	I1217 20:01:05.381524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:01:05.390175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:01:05.390214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 20:01:22.788587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:01:22.788662       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"443e0966-e91f-456b-b43e-a7e2d61f2da7", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-894575_dfdce6b5-33f1-4b4e-869f-53f9ae2d66d2 became leader
	I1217 20:01:22.788755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-894575_dfdce6b5-33f1-4b4e-869f-53f9ae2d66d2!
	I1217 20:01:22.889009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-894575_dfdce6b5-33f1-4b4e-869f-53f9ae2d66d2!
	
	
	==> storage-provisioner [780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33] <==
	I1217 20:00:34.656300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:01:04.660666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-894575 -n old-k8s-version-894575
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-894575 -n old-k8s-version-894575: exit status 2 (367.729854ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-894575 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect old-k8s-version-894575
helpers_test.go:244: (dbg) docker inspect old-k8s-version-894575:

-- stdout --
	[
	    {
	        "Id": "f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c",
	        "Created": "2025-12-17T19:59:10.569830275Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 625646,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:00:24.54970804Z",
	            "FinishedAt": "2025-12-17T20:00:23.607860294Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/hosts",
	        "LogPath": "/var/lib/docker/containers/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c/f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c-json.log",
	        "Name": "/old-k8s-version-894575",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-894575:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-894575",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f5ebc1c53bc84c39ca57e291b3d376c12701623821efd7aa06f11ea9e9b21a6c",
	                "LowerDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf0c071fa6be4c9c271a4ed41c01c193473d129d1f0cbb58862fb849a662aa72/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-894575",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-894575/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-894575",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-894575",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-894575",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "8cac54020272609db5b8b6033223539b316bb31e69b928e889a7e91959c5216b",
	            "SandboxKey": "/var/run/docker/netns/8cac54020272",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-894575": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0ce1019d98582b4ef902421b21faaa999552d06bbfa4979e1d39a9d27bb73b1",
	                    "EndpointID": "1d28ac80344ae61aefed057e50ab31c5de69173bd1ee4899d222a99d440f22b6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "46:5a:2e:8c:ff:ad",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-894575",
	                        "f5ebc1c53bc8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
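The NetworkSettings.Ports block in the docker inspect output above records which localhost ports map onto the container's 22/2376/5000/8443/32443 endpoints. Purely as a sketch (not part of the harness), the same mappings could be read programmatically by decoding the `docker inspect` JSON; the struct below covers only the fields this example needs, and the container name matches the profile under test.

// inspect_ports.go - illustrative sketch for reading host-port bindings from docker inspect.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type inspectEntry struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "old-k8s-version-894575").Output()
	if err != nil {
		panic(err)
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		panic(err)
	}
	if len(entries) == 0 {
		fmt.Println("no such container")
		return
	}
	for port, bindings := range entries[0].NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIp, b.HostPort)
		}
	}
}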
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575: exit status 2 (353.677864ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
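The status check above prints "Running" yet exits with status 2, which the harness explicitly tolerates ("may be ok"): a non-zero exit from `minikube status` indicates at least one component is not in its expected state, as expected while the cluster is paused. A small illustrative wrapper (not harness code) showing how both the output and that exit code can be captured from Go:

// status_exit.go - illustrative sketch of capturing minikube status output and its exit code.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-894575")
	out, err := cmd.Output() // stdout is still returned even when the command exits non-zero
	fmt.Print(string(out))   // e.g. "Running"
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero status is expected while the cluster is paused or degraded.
		fmt.Println("exit status:", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run minikube:", err)
	}
}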
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-894575 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-894575 logs -n 25: (1.246861909s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p NoKubernetes-327438 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:58 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ cert-options-997440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                   │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p cert-options-997440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                 │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p cert-options-997440                                                                                                                                                                                                                        │ cert-options-997440          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ ssh     │ -p NoKubernetes-327438 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │                     │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                        │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                               │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p old-k8s-version-894575 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                     │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                    │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                               │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                          │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:00:42
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:00:42.430475  631473 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:00:42.430717  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430725  631473 out.go:374] Setting ErrFile to fd 2...
	I1217 20:00:42.430734  631473 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:00:42.430932  631473 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:00:42.431484  631473 out.go:368] Setting JSON to false
	I1217 20:00:42.432651  631473 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6193,"bootTime":1765995449,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:00:42.432716  631473 start.go:143] virtualization: kvm guest
	I1217 20:00:42.434554  631473 out.go:179] * [default-k8s-diff-port-759234] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:00:42.436272  631473 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:00:42.436339  631473 notify.go:221] Checking for updates...
	I1217 20:00:42.438673  631473 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:00:42.439791  631473 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:00:42.444253  631473 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:00:42.445569  631473 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:00:42.446765  631473 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:00:42.448395  631473 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448504  631473 config.go:182] Loaded profile config "no-preload-832842": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:00:42.448574  631473 config.go:182] Loaded profile config "old-k8s-version-894575": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1217 20:00:42.448676  631473 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:00:42.473152  631473 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:00:42.473303  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.530715  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.520326347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.530839  631473 docker.go:319] overlay module found
	I1217 20:00:42.533607  631473 out.go:179] * Using the docker driver based on user configuration
	I1217 20:00:42.534900  631473 start.go:309] selected driver: docker
	I1217 20:00:42.534931  631473 start.go:927] validating driver "docker" against <nil>
	I1217 20:00:42.534945  631473 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:00:42.535594  631473 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:00:42.593983  631473 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:00:42.584279589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:00:42.594185  631473 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:00:42.594402  631473 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:00:42.596050  631473 out.go:179] * Using Docker driver with root privileges
	I1217 20:00:42.597217  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:42.597290  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:42.597303  631473 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:00:42.597383  631473 start.go:353] cluster config:
	{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:42.599022  631473 out.go:179] * Starting "default-k8s-diff-port-759234" primary control-plane node in "default-k8s-diff-port-759234" cluster
	I1217 20:00:42.600540  631473 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:00:42.601819  631473 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:00:42.603027  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:42.603089  631473 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:00:42.603104  631473 cache.go:65] Caching tarball of preloaded images
	I1217 20:00:42.603158  631473 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:00:42.603241  631473 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:00:42.603255  631473 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:00:42.603409  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:42.603441  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json: {Name:mka62982d045e5cb058ac77025f345457b6a6373 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:42.624544  631473 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:00:42.624564  631473 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:00:42.624587  631473 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:00:42.624618  631473 start.go:360] acquireMachinesLock for default-k8s-diff-port-759234: {Name:mk173016aaa355dafae1bd5727aae1037817b426 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:00:42.624714  631473 start.go:364] duration metric: took 77.83µs to acquireMachinesLock for "default-k8s-diff-port-759234"
	I1217 20:00:42.624738  631473 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:00:42.624812  631473 start.go:125] createHost starting for "" (driver="docker")
	W1217 20:00:39.572913  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.072117  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:44.072432  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:42.104752  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:44.105460  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:44.011034  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:44.011594  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:44.011658  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:44.011708  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:44.044351  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.044381  596882 cri.go:89] found id: ""
	I1217 20:00:44.044394  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:44.044463  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.049338  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:44.049428  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:44.080283  596882 cri.go:89] found id: ""
	I1217 20:00:44.080314  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.080326  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:44.080337  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:44.080404  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:44.113789  596882 cri.go:89] found id: ""
	I1217 20:00:44.113818  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.113829  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:44.113835  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:44.113889  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:44.146485  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.146516  596882 cri.go:89] found id: ""
	I1217 20:00:44.146529  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:44.146598  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.150860  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:44.150933  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:44.180612  596882 cri.go:89] found id: ""
	I1217 20:00:44.180648  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.180661  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:44.180669  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:44.180733  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:44.215315  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.215341  596882 cri.go:89] found id: ""
	I1217 20:00:44.215351  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:44.215410  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:44.219707  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:44.219792  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:44.250358  596882 cri.go:89] found id: ""
	I1217 20:00:44.250390  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.250402  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:44.250410  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:44.250480  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:44.279599  596882 cri.go:89] found id: ""
	I1217 20:00:44.279629  596882 logs.go:282] 0 containers: []
	W1217 20:00:44.279639  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:44.279654  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:44.279673  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:44.366299  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:44.366333  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:44.383253  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:44.383288  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:44.442881  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:44.442906  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:44.442929  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:44.483060  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:44.483124  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:44.514331  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:44.514367  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:44.542722  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:44.542760  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:44.590351  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:44.590389  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.127294  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:47.127787  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:47.127853  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:47.127918  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:47.156370  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.156396  596882 cri.go:89] found id: ""
	I1217 20:00:47.156404  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:47.156460  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.160516  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:47.160594  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:47.195038  596882 cri.go:89] found id: ""
	I1217 20:00:47.195068  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.195137  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:47.195143  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:47.195196  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:47.226808  596882 cri.go:89] found id: ""
	I1217 20:00:47.226835  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.226845  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:47.226851  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:47.226903  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:42.626516  631473 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:00:42.626787  631473 start.go:159] libmachine.API.Create for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:42.626819  631473 client.go:173] LocalClient.Create starting
	I1217 20:00:42.626888  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:00:42.626923  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.626942  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.626999  631473 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:00:42.627020  631473 main.go:143] libmachine: Decoding PEM data...
	I1217 20:00:42.627031  631473 main.go:143] libmachine: Parsing certificate...
	I1217 20:00:42.627386  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:00:42.645356  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:00:42.645431  631473 network_create.go:284] running [docker network inspect default-k8s-diff-port-759234] to gather additional debugging logs...
	I1217 20:00:42.645452  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234
	W1217 20:00:42.662433  631473 cli_runner.go:211] docker network inspect default-k8s-diff-port-759234 returned with exit code 1
	I1217 20:00:42.662463  631473 network_create.go:287] error running [docker network inspect default-k8s-diff-port-759234]: docker network inspect default-k8s-diff-port-759234: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-759234 not found
	I1217 20:00:42.662486  631473 network_create.go:289] output of [docker network inspect default-k8s-diff-port-759234]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-759234 not found
	
	** /stderr **
	I1217 20:00:42.662577  631473 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:42.680765  631473 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:00:42.681557  631473 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:00:42.682052  631473 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:00:42.682584  631473 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 20:00:42.683304  631473 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 20:00:42.684136  631473 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f4b420}
	I1217 20:00:42.684173  631473 network_create.go:124] attempt to create docker network default-k8s-diff-port-759234 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 20:00:42.684252  631473 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 default-k8s-diff-port-759234
	I1217 20:00:42.733976  631473 network_create.go:108] docker network default-k8s-diff-port-759234 192.168.94.0/24 created
	I1217 20:00:42.734006  631473 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-759234" container
	I1217 20:00:42.734062  631473 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:00:42.752583  631473 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-759234 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:00:42.773596  631473 oci.go:103] Successfully created a docker volume default-k8s-diff-port-759234
	I1217 20:00:42.773686  631473 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-759234-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --entrypoint /usr/bin/test -v default-k8s-diff-port-759234:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:00:43.205798  631473 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-759234
	I1217 20:00:43.205868  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:43.205880  631473 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:00:43.205970  631473 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:00:47.198577  631473 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-759234:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.992562765s)
	I1217 20:00:47.198609  631473 kic.go:203] duration metric: took 3.992725296s to extract preloaded images to volume ...
	W1217 20:00:47.198694  631473 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:00:47.198723  631473 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:00:47.198767  631473 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:00:47.260923  631473 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-759234 --name default-k8s-diff-port-759234 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-759234 --network default-k8s-diff-port-759234 --ip 192.168.94.2 --volume default-k8s-diff-port-759234:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	W1217 20:00:46.572829  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:49.072264  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:46.605455  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:49.104308  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:47.261698  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.261722  596882 cri.go:89] found id: ""
	I1217 20:00:47.261733  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:47.261790  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.267357  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:47.267438  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:47.306726  596882 cri.go:89] found id: ""
	I1217 20:00:47.306759  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.306770  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:47.306778  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:47.306842  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:47.340875  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.340912  596882 cri.go:89] found id: ""
	I1217 20:00:47.340924  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:47.341135  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:47.345736  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:47.345806  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:47.376962  596882 cri.go:89] found id: ""
	I1217 20:00:47.377012  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.377025  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:47.377032  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:47.377124  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:47.407325  596882 cri.go:89] found id: ""
	I1217 20:00:47.407359  596882 logs.go:282] 0 containers: []
	W1217 20:00:47.407374  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:47.407387  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:47.407408  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:47.473703  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:47.473725  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:47.473743  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:47.508764  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:47.508811  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:47.539065  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:47.539113  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:47.571543  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:47.571587  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:47.643416  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:47.643456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.689273  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:47.689316  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:47.823222  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:47.823260  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.347237  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:50.347659  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:50.347717  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:50.348197  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:50.391187  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.391339  596882 cri.go:89] found id: ""
	I1217 20:00:50.391419  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:50.391505  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.396902  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:50.397015  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:50.441286  596882 cri.go:89] found id: ""
	I1217 20:00:50.441360  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.441373  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:50.441389  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:50.441452  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:50.479045  596882 cri.go:89] found id: ""
	I1217 20:00:50.479088  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.479100  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:50.479108  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:50.479174  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:50.515926  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.516275  596882 cri.go:89] found id: ""
	I1217 20:00:50.516295  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:50.516365  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.522153  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:50.522238  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:50.562124  596882 cri.go:89] found id: ""
	I1217 20:00:50.562187  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.562199  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:50.562208  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:50.562277  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:50.601222  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:50.601377  596882 cri.go:89] found id: ""
	I1217 20:00:50.601396  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:50.601522  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:50.607093  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:50.607179  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:50.643677  596882 cri.go:89] found id: ""
	I1217 20:00:50.643709  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.643725  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:50.643734  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:50.643810  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:50.683346  596882 cri.go:89] found id: ""
	I1217 20:00:50.683378  596882 logs.go:282] 0 containers: []
	W1217 20:00:50.683389  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:50.683402  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:50.683418  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:50.807284  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:50.807323  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:50.829965  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:50.830005  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:50.903560  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:50.903583  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:50.903608  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:50.952336  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:50.952375  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:50.986508  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:50.986545  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:51.022486  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:51.022517  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:51.088659  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:51.088715  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:47.583096  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Running}}
	I1217 20:00:47.608914  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.634283  631473 cli_runner.go:164] Run: docker exec default-k8s-diff-port-759234 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:00:47.694519  631473 oci.go:144] the created container "default-k8s-diff-port-759234" has a running status.
	I1217 20:00:47.694556  631473 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa...
	I1217 20:00:47.741322  631473 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:00:47.777682  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.801570  631473 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:00:47.801595  631473 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-759234 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:00:47.858176  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:00:47.886441  631473 machine.go:94] provisionDockerMachine start ...
	I1217 20:00:47.886562  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:47.913250  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:47.913628  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:47.913655  631473 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:00:47.914572  631473 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49044->127.0.0.1:33453: read: connection reset by peer
	I1217 20:00:51.082474  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.082503  631473 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-759234"
	I1217 20:00:51.082569  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.109173  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.109464  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.109487  631473 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-759234 && echo "default-k8s-diff-port-759234" | sudo tee /etc/hostname
	I1217 20:00:51.282514  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-759234
	
	I1217 20:00:51.282597  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.302139  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.302370  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.302388  631473 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-759234' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-759234/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-759234' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:00:51.456372  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:00:51.456426  631473 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:00:51.456479  631473 ubuntu.go:190] setting up certificates
	I1217 20:00:51.456491  631473 provision.go:84] configureAuth start
	I1217 20:00:51.456563  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:51.480508  631473 provision.go:143] copyHostCerts
	I1217 20:00:51.480576  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:00:51.480592  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:00:51.480669  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:00:51.480772  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:00:51.480783  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:00:51.480822  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:00:51.480896  631473 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:00:51.480906  631473 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:00:51.480938  631473 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:00:51.481006  631473 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-759234 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-759234 localhost minikube]
	I1217 20:00:51.633655  631473 provision.go:177] copyRemoteCerts
	I1217 20:00:51.633763  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:00:51.633814  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.658060  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:51.774263  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:00:51.836683  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1217 20:00:51.862224  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:00:51.890608  631473 provision.go:87] duration metric: took 434.096039ms to configureAuth
	I1217 20:00:51.890644  631473 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:00:51.890863  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:00:51.891022  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:51.916236  631473 main.go:143] libmachine: Using SSH client type: native
	I1217 20:00:51.916552  631473 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1217 20:00:51.916578  631473 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:00:52.350209  631473 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:00:52.350238  631473 machine.go:97] duration metric: took 4.46376868s to provisionDockerMachine
	I1217 20:00:52.350253  631473 client.go:176] duration metric: took 9.723424305s to LocalClient.Create
	I1217 20:00:52.350277  631473 start.go:167] duration metric: took 9.72348972s to libmachine.API.Create "default-k8s-diff-port-759234"
	I1217 20:00:52.350294  631473 start.go:293] postStartSetup for "default-k8s-diff-port-759234" (driver="docker")
	I1217 20:00:52.350305  631473 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:00:52.350383  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:00:52.350429  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.369228  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.477868  631473 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:00:52.482314  631473 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:00:52.482357  631473 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:00:52.482372  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:00:52.482454  631473 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:00:52.482534  631473 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:00:52.482625  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:00:52.491557  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:52.515015  631473 start.go:296] duration metric: took 164.702667ms for postStartSetup
	I1217 20:00:52.515418  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.535477  631473 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/config.json ...
	I1217 20:00:52.535813  631473 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:00:52.535873  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.555517  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.657422  631473 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:00:52.662205  631473 start.go:128] duration metric: took 10.037371351s to createHost
	I1217 20:00:52.662241  631473 start.go:83] releasing machines lock for "default-k8s-diff-port-759234", held for 10.037515093s
	I1217 20:00:52.662322  631473 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-759234
	I1217 20:00:52.680193  631473 ssh_runner.go:195] Run: cat /version.json
	I1217 20:00:52.680276  631473 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:00:52.680310  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.680347  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:00:52.701061  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.701301  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:00:52.851661  631473 ssh_runner.go:195] Run: systemctl --version
	I1217 20:00:52.858481  631473 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:00:52.893608  631473 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:00:52.898824  631473 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:00:52.898902  631473 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:00:52.924893  631473 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:00:52.924917  631473 start.go:496] detecting cgroup driver to use...
	I1217 20:00:52.924946  631473 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:00:52.924995  631473 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:00:52.941996  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:00:52.954497  631473 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:00:52.954559  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:00:52.971423  631473 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:00:52.990488  631473 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:00:53.079469  631473 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:00:53.166815  631473 docker.go:234] disabling docker service ...
	I1217 20:00:53.166878  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:00:53.186920  631473 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:00:53.200855  631473 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:00:53.290366  631473 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:00:53.387334  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:00:53.400172  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:00:53.415056  631473 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:00:53.415136  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.425540  631473 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:00:53.425617  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.435225  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.444865  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.455024  631473 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:00:53.464046  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.473632  631473 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.488327  631473 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:00:53.498230  631473 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:00:53.506887  631473 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:00:53.516474  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:53.601252  631473 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:00:54.068135  631473 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:00:54.068217  631473 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:00:54.073472  631473 start.go:564] Will wait 60s for crictl version
	I1217 20:00:54.073554  631473 ssh_runner.go:195] Run: which crictl
	I1217 20:00:54.078383  631473 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:00:54.106787  631473 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:00:54.106878  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.140042  631473 ssh_runner.go:195] Run: crio --version
	I1217 20:00:54.172909  631473 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1217 20:00:51.073128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:53.572242  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:51.105457  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:53.606663  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:00:53.632189  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:53.632791  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:53.632867  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:53.632941  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:53.662308  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:53.662339  596882 cri.go:89] found id: ""
	I1217 20:00:53.662350  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:53.662420  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.666413  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:53.666495  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:53.695377  596882 cri.go:89] found id: ""
	I1217 20:00:53.695409  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.695421  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:53.695429  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:53.695516  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:53.724146  596882 cri.go:89] found id: ""
	I1217 20:00:53.724177  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.724187  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:53.724252  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:53.724349  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:53.752962  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:53.752990  596882 cri.go:89] found id: ""
	I1217 20:00:53.753000  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:53.753058  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.757461  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:53.757549  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:53.785748  596882 cri.go:89] found id: ""
	I1217 20:00:53.785774  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.785785  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:53.785792  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:53.785862  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:53.815860  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:53.815889  596882 cri.go:89] found id: ""
	I1217 20:00:53.815899  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:53.815952  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:53.820565  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:53.820632  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:53.847814  596882 cri.go:89] found id: ""
	I1217 20:00:53.847839  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.847850  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:53.847857  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:53.847920  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:53.876185  596882 cri.go:89] found id: ""
	I1217 20:00:53.876218  596882 logs.go:282] 0 containers: []
	W1217 20:00:53.876230  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:53.876244  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:53.876259  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:53.971642  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:53.971693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:53.990638  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:53.990675  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:00:54.050668  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:54.050692  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:54.050707  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:54.084846  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:54.084893  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:54.115061  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:54.115108  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:54.146463  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:54.146491  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:54.199121  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:54.199159  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:56.736153  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:00:56.736638  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:00:56.736693  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:00:56.736746  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:00:56.765576  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:56.765600  596882 cri.go:89] found id: ""
	I1217 20:00:56.765610  596882 logs.go:282] 1 containers: [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:00:56.765676  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.769942  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:00:56.770013  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:00:56.798112  596882 cri.go:89] found id: ""
	I1217 20:00:56.798145  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.798157  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:00:56.798165  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:00:56.798234  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:00:56.825167  596882 cri.go:89] found id: ""
	I1217 20:00:56.825200  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.825231  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:00:56.825247  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:00:56.825311  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:00:56.852568  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:56.852592  596882 cri.go:89] found id: ""
	I1217 20:00:56.852602  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:00:56.852661  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.856829  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:00:56.856902  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:00:56.883929  596882 cri.go:89] found id: ""
	I1217 20:00:56.883973  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.883986  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:00:56.883999  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:00:56.884062  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:00:56.911693  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:56.911714  596882 cri.go:89] found id: ""
	I1217 20:00:56.911722  596882 logs.go:282] 1 containers: [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:00:56.911772  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:00:56.916212  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:00:56.916276  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:00:56.942585  596882 cri.go:89] found id: ""
	I1217 20:00:56.942617  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.942633  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:00:56.942642  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:00:56.942700  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:00:56.971939  596882 cri.go:89] found id: ""
	I1217 20:00:56.971976  596882 logs.go:282] 0 containers: []
	W1217 20:00:56.971990  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:00:56.972004  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:00:56.972024  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:00:57.001777  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:00:57.001806  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:00:57.032936  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:00:57.032965  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:00:57.078327  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:00:57.078364  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:00:57.113176  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:00:57.113213  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:00:57.201920  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:00:57.201957  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:00:57.218426  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:00:57.218456  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:00:54.174562  631473 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-759234 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:00:54.194566  631473 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 20:00:54.199116  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:00:54.210935  631473 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:00:54.211103  631473 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:00:54.211184  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.248494  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.248518  631473 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:00:54.248568  631473 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:00:54.273697  631473 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:00:54.273726  631473 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:00:54.273735  631473 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.3 crio true true} ...
	I1217 20:00:54.273832  631473 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-759234 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:00:54.273935  631473 ssh_runner.go:195] Run: crio config
	I1217 20:00:54.323646  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:00:54.323671  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:00:54.323691  631473 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:00:54.323723  631473 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-759234 NodeName:default-k8s-diff-port-759234 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:00:54.323843  631473 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-759234"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:00:54.323910  631473 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:00:54.333287  631473 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:00:54.333359  631473 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:00:54.341865  631473 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1217 20:00:54.355367  631473 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:00:54.370136  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1217 20:00:54.383695  631473 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:00:54.387416  631473 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:00:54.397752  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:00:54.478375  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:00:54.502901  631473 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234 for IP: 192.168.94.2
	I1217 20:00:54.502928  631473 certs.go:195] generating shared ca certs ...
	I1217 20:00:54.502956  631473 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.503145  631473 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:00:54.503202  631473 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:00:54.503217  631473 certs.go:257] generating profile certs ...
	I1217 20:00:54.503295  631473 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key
	I1217 20:00:54.503322  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt with IP's: []
	I1217 20:00:54.617711  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt ...
	I1217 20:00:54.617747  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.crt: {Name:mk5d78d7f68addaf1f73847c6c02fd442f5e6ddd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.617930  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key ...
	I1217 20:00:54.617950  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/client.key: {Name:mke8a415d0af374cf9fe8570e6fe4c7202332109 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.618032  631473 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167
	I1217 20:00:54.618049  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 20:00:54.665685  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 ...
	I1217 20:00:54.665716  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167: {Name:mkfcccc5ab764237ebc01d7e772bd39ad2e57805 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.665884  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 ...
	I1217 20:00:54.665904  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167: {Name:mk4c6de11c85c3fb77bd1f278ce0e0fd2b33aff3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.666008  631473 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt
	I1217 20:00:54.666104  631473 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key.e1807167 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key
	I1217 20:00:54.666162  631473 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key
	I1217 20:00:54.666178  631473 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt with IP's: []
	I1217 20:00:54.735423  631473 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt ...
	I1217 20:00:54.735452  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt: {Name:mk6946a87226d60c386ab3fc364ed99a58d10cba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.735624  631473 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key ...
	I1217 20:00:54.735638  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key: {Name:mk6cae84f91184f3a12c3274f32b7e32ae6eea78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:00:54.735804  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:00:54.735844  631473 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:00:54.735855  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:00:54.735877  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:00:54.735901  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:00:54.735925  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:00:54.735974  631473 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:00:54.736625  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:00:54.756198  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:00:54.773753  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:00:54.791250  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:00:54.809439  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1217 20:00:54.828101  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:00:54.847713  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:00:54.866560  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/default-k8s-diff-port-759234/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:00:54.885184  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:00:54.906455  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:00:54.924265  631473 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:00:54.942817  631473 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:00:54.956309  631473 ssh_runner.go:195] Run: openssl version
	I1217 20:00:54.962641  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.971170  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:00:54.979233  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983177  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:00:54.983245  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:00:55.018977  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.027253  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:00:55.035165  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.043017  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:00:55.051440  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055458  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.055523  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:00:55.092379  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:00:55.101231  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:00:55.111064  631473 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.119199  631473 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:00:55.127063  631473 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.130993  631473 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.131062  631473 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:00:55.165321  631473 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:00:55.173294  631473 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:00:55.181422  631473 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:00:55.185376  631473 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:00:55.185448  631473 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-759234 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-759234 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:00:55.185546  631473 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:00:55.185607  631473 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:00:55.217477  631473 cri.go:89] found id: ""
	I1217 20:00:55.217551  631473 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:00:55.226933  631473 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:00:55.236854  631473 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:00:55.236934  631473 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:00:55.245579  631473 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:00:55.245602  631473 kubeadm.go:158] found existing configuration files:
	
	I1217 20:00:55.245652  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1217 20:00:55.253938  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:00:55.253998  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:00:55.261865  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1217 20:00:55.269887  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:00:55.269992  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:00:55.278000  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.286714  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:00:55.286788  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:00:55.296035  631473 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1217 20:00:55.305037  631473 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:00:55.305131  631473 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:00:55.312998  631473 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:00:55.373971  631473 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:00:55.436480  631473 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1217 20:00:56.071929  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:58.571128  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:00:56.104574  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:58.604838  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:00:57.277327  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:00:57.277349  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:00:57.277366  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:00:59.811179  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	W1217 20:01:01.071960  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:03.571727  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:00.604975  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:02.605263  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	W1217 20:01:05.106561  624471 pod_ready.go:104] pod "coredns-7d764666f9-988jw" is not "Ready", error: <nil>
	I1217 20:01:06.067126  631473 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:01:06.067196  631473 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:01:06.067312  631473 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:01:06.067401  631473 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:01:06.067442  631473 kubeadm.go:319] OS: Linux
	I1217 20:01:06.067513  631473 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:01:06.067558  631473 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:01:06.067635  631473 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:01:06.067697  631473 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:01:06.067738  631473 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:01:06.067813  631473 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:01:06.067880  631473 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:01:06.067957  631473 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:01:06.068050  631473 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:01:06.068197  631473 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:01:06.068340  631473 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:01:06.068462  631473 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:01:06.070305  631473 out.go:252]   - Generating certificates and keys ...
	I1217 20:01:06.070395  631473 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:01:06.070458  631473 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:01:06.070524  631473 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:01:06.070580  631473 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:01:06.070634  631473 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:01:06.070675  631473 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:01:06.070722  631473 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:01:06.070887  631473 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.070954  631473 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:01:06.071106  631473 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-759234 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:01:06.071215  631473 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:01:06.071290  631473 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:01:06.071343  631473 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:01:06.071423  631473 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:01:06.071499  631473 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:01:06.071573  631473 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:01:06.071647  631473 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:01:06.071757  631473 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:01:06.071841  631473 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:01:06.071959  631473 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:01:06.072065  631473 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:01:06.073367  631473 out.go:252]   - Booting up control plane ...
	I1217 20:01:06.073455  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:01:06.073530  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:01:06.073591  631473 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:01:06.073692  631473 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:01:06.073789  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:01:06.073886  631473 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:01:06.073960  631473 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:01:06.074002  631473 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:01:06.074140  631473 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:01:06.074228  631473 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:01:06.074276  631473 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001922128s
	I1217 20:01:06.074352  631473 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:01:06.074416  631473 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1217 20:01:06.074487  631473 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:01:06.074549  631473 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:01:06.074624  631473 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.929603333s
	I1217 20:01:06.074691  631473 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.11807832s
	I1217 20:01:06.074783  631473 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002138646s
	I1217 20:01:06.074883  631473 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:01:06.074999  631473 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:01:06.075046  631473 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:01:06.075233  631473 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-759234 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:01:06.075296  631473 kubeadm.go:319] [bootstrap-token] Using token: v6m366.ufgpfn05m87tgdpr
	I1217 20:01:06.076758  631473 out.go:252]   - Configuring RBAC rules ...
	I1217 20:01:06.076848  631473 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:01:06.076928  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:01:06.077189  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:01:06.077365  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:01:06.077488  631473 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:01:06.077579  631473 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:01:06.077727  631473 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:01:06.077797  631473 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:01:06.077864  631473 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:01:06.077879  631473 kubeadm.go:319] 
	I1217 20:01:06.077952  631473 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:01:06.077959  631473 kubeadm.go:319] 
	I1217 20:01:06.078019  631473 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:01:06.078028  631473 kubeadm.go:319] 
	I1217 20:01:06.078048  631473 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:01:06.078140  631473 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:01:06.078221  631473 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:01:06.078230  631473 kubeadm.go:319] 
	I1217 20:01:06.078313  631473 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:01:06.078322  631473 kubeadm.go:319] 
	I1217 20:01:06.078396  631473 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:01:06.078404  631473 kubeadm.go:319] 
	I1217 20:01:06.078487  631473 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:01:06.078589  631473 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:01:06.078685  631473 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:01:06.078694  631473 kubeadm.go:319] 
	I1217 20:01:06.078778  631473 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:01:06.078851  631473 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:01:06.078857  631473 kubeadm.go:319] 
	I1217 20:01:06.078933  631473 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079036  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:01:06.079057  631473 kubeadm.go:319] 	--control-plane 
	I1217 20:01:06.079060  631473 kubeadm.go:319] 
	I1217 20:01:06.079150  631473 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:01:06.079160  631473 kubeadm.go:319] 
	I1217 20:01:06.079259  631473 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token v6m366.ufgpfn05m87tgdpr \
	I1217 20:01:06.079417  631473 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:01:06.079446  631473 cni.go:84] Creating CNI manager for ""
	I1217 20:01:06.079457  631473 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:06.081231  631473 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:01:04.812163  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1217 20:01:04.812235  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:04.812292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:04.844291  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:04.844315  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:04.844319  596882 cri.go:89] found id: ""
	I1217 20:01:04.844328  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:04.844385  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.848366  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.852177  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:04.852256  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:04.883987  596882 cri.go:89] found id: ""
	I1217 20:01:04.884024  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.884038  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:04.884051  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:04.884140  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:04.914990  596882 cri.go:89] found id: ""
	I1217 20:01:04.915020  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.915031  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:04.915040  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:04.915135  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:04.944932  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:04.944965  596882 cri.go:89] found id: ""
	I1217 20:01:04.944978  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:04.945047  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:04.949407  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:04.949476  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:04.980714  596882 cri.go:89] found id: ""
	I1217 20:01:04.980744  596882 logs.go:282] 0 containers: []
	W1217 20:01:04.980756  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:04.980765  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:04.980827  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:05.014278  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:05.014303  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:05.014306  596882 cri.go:89] found id: ""
	I1217 20:01:05.014315  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:05.014369  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.019212  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:05.023605  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:05.023688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:05.054178  596882 cri.go:89] found id: ""
	I1217 20:01:05.054210  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.054220  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:05.054226  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:05.054297  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:05.089365  596882 cri.go:89] found id: ""
	I1217 20:01:05.089398  596882 logs.go:282] 0 containers: []
	W1217 20:01:05.089410  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:05.089432  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:05.089451  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:05.129946  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:05.129977  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:05.229093  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:05.229136  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1217 20:01:06.082676  631473 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:01:06.087568  631473 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:01:06.087588  631473 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:01:06.101995  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:01:06.315905  631473 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.315984  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-759234 minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=default-k8s-diff-port-759234 minikube.k8s.io/primary=true
	I1217 20:01:06.327829  631473 ops.go:34] apiserver oom_adj: -16
	I1217 20:01:06.396458  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:06.897042  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.396599  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:07.604674  624471 pod_ready.go:94] pod "coredns-7d764666f9-988jw" is "Ready"
	I1217 20:01:07.604701  624471 pod_ready.go:86] duration metric: took 37.00583192s for pod "coredns-7d764666f9-988jw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.607174  624471 pod_ready.go:83] waiting for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.611282  624471 pod_ready.go:94] pod "etcd-no-preload-832842" is "Ready"
	I1217 20:01:07.611311  624471 pod_ready.go:86] duration metric: took 4.112039ms for pod "etcd-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.613297  624471 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.617064  624471 pod_ready.go:94] pod "kube-apiserver-no-preload-832842" is "Ready"
	I1217 20:01:07.617117  624471 pod_ready.go:86] duration metric: took 3.797766ms for pod "kube-apiserver-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.619212  624471 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:07.803328  624471 pod_ready.go:94] pod "kube-controller-manager-no-preload-832842" is "Ready"
	I1217 20:01:07.803357  624471 pod_ready.go:86] duration metric: took 184.117172ms for pod "kube-controller-manager-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.003550  624471 pod_ready.go:83] waiting for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.403261  624471 pod_ready.go:94] pod "kube-proxy-jc5dd" is "Ready"
	I1217 20:01:08.403288  624471 pod_ready.go:86] duration metric: took 399.709625ms for pod "kube-proxy-jc5dd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:08.603502  624471 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002875  624471 pod_ready.go:94] pod "kube-scheduler-no-preload-832842" is "Ready"
	I1217 20:01:09.002905  624471 pod_ready.go:86] duration metric: took 399.378114ms for pod "kube-scheduler-no-preload-832842" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.002919  624471 pod_ready.go:40] duration metric: took 38.408153316s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:09.051128  624471 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:01:09.053534  624471 out.go:179] * Done! kubectl is now configured to use "no-preload-832842" cluster and "default" namespace by default
	W1217 20:01:06.072320  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	W1217 20:01:08.571546  625400 pod_ready.go:104] pod "coredns-5dd5756b68-gbhs5" is not "Ready", error: <nil>
	I1217 20:01:07.897116  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.397124  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:08.897399  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.397296  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:09.897202  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.397310  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.897175  631473 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:10.975504  631473 kubeadm.go:1114] duration metric: took 4.659591269s to wait for elevateKubeSystemPrivileges
	I1217 20:01:10.975540  631473 kubeadm.go:403] duration metric: took 15.790098497s to StartCluster
	I1217 20:01:10.975558  631473 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.975646  631473 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:10.977547  631473 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:10.977796  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:01:10.977817  631473 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:10.977867  631473 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:01:10.978006  631473 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978029  631473 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:10.978054  631473 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-759234"
	I1217 20:01:10.978101  631473 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-759234"
	I1217 20:01:10.978031  631473 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-759234"
	I1217 20:01:10.978248  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:10.978539  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.978747  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:10.979515  631473 out.go:179] * Verifying Kubernetes components...
	I1217 20:01:10.980948  631473 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:11.004351  631473 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:01:09.570523  625400 pod_ready.go:94] pod "coredns-5dd5756b68-gbhs5" is "Ready"
	I1217 20:01:09.570551  625400 pod_ready.go:86] duration metric: took 34.005219617s for pod "coredns-5dd5756b68-gbhs5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.573051  625400 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.576701  625400 pod_ready.go:94] pod "etcd-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.576725  625400 pod_ready.go:86] duration metric: took 3.651465ms for pod "etcd-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.579243  625400 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.583452  625400 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.583478  625400 pod_ready.go:86] duration metric: took 4.213779ms for pod "kube-apiserver-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.585997  625400 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.768942  625400 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-894575" is "Ready"
	I1217 20:01:09.768977  625400 pod_ready.go:86] duration metric: took 182.957254ms for pod "kube-controller-manager-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:09.970200  625400 pod_ready.go:83] waiting for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.368408  625400 pod_ready.go:94] pod "kube-proxy-bdzb6" is "Ready"
	I1217 20:01:10.368435  625400 pod_ready.go:86] duration metric: took 398.20631ms for pod "kube-proxy-bdzb6" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.569794  625400 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969210  625400 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-894575" is "Ready"
	I1217 20:01:10.969252  625400 pod_ready.go:86] duration metric: took 399.426249ms for pod "kube-scheduler-old-k8s-version-894575" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:10.969270  625400 pod_ready.go:40] duration metric: took 35.409804659s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:11.041190  625400 start.go:625] kubectl: 1.35.0, cluster: 1.28.0 (minor skew: 7)
	I1217 20:01:11.044208  625400 out.go:203] 
	W1217 20:01:11.045630  625400 out.go:285] ! /usr/local/bin/kubectl is version 1.35.0, which may have incompatibilities with Kubernetes 1.28.0.
	I1217 20:01:11.047652  625400 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1217 20:01:11.049163  625400 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-894575" cluster and "default" namespace by default
	I1217 20:01:11.005141  631473 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-759234"
	I1217 20:01:11.005190  631473 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:01:11.005673  631473 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:01:11.005685  631473 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.005702  631473 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:01:11.005753  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.034589  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.037037  631473 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.037065  631473 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:01:11.037212  631473 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:01:11.065091  631473 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:01:11.078156  631473 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:01:11.158438  631473 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:11.173742  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:11.214719  631473 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:11.376291  631473 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 20:01:11.376906  631473 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-759234" to be "Ready" ...
	I1217 20:01:11.616252  631473 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:01:11.617452  631473 addons.go:530] duration metric: took 639.583404ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:01:11.880698  631473 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-759234" context rescaled to 1 replicas
	I1217 20:01:15.295985  596882 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.066827019s)
	W1217 20:01:15.296022  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1217 20:01:15.296032  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:15.296044  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:15.329910  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:15.329943  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:15.361430  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:15.361465  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:15.379135  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:15.379176  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:15.413631  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:15.413671  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:15.444072  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:15.444120  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:15.474296  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:15.474325  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:13.379733  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:15.380677  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:17.382167  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	I1217 20:01:18.028829  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:19.268145  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:48746->192.168.76.2:8443: read: connection reset by peer
	I1217 20:01:19.268222  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:19.268292  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:19.297951  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.297972  596882 cri.go:89] found id: "6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	I1217 20:01:19.297976  596882 cri.go:89] found id: ""
	I1217 20:01:19.297984  596882 logs.go:282] 2 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]
	I1217 20:01:19.298048  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.302214  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.305947  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:19.306014  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:19.333763  596882 cri.go:89] found id: ""
	I1217 20:01:19.333789  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.333798  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:19.333804  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:19.333864  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:19.362644  596882 cri.go:89] found id: ""
	I1217 20:01:19.362672  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.362682  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:19.362687  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:19.362752  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:19.394030  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.394059  596882 cri.go:89] found id: ""
	I1217 20:01:19.394071  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:19.394157  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.398506  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:19.398583  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:19.425535  596882 cri.go:89] found id: ""
	I1217 20:01:19.425560  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.425569  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:19.425575  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:19.425638  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:19.454704  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.454726  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.454731  596882 cri.go:89] found id: ""
	I1217 20:01:19.454743  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:19.454811  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.459054  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:19.463029  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:19.463111  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:19.491583  596882 cri.go:89] found id: ""
	I1217 20:01:19.491610  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.491622  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:19.491631  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:19.491688  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:19.520292  596882 cri.go:89] found id: ""
	I1217 20:01:19.520328  596882 logs.go:282] 0 containers: []
	W1217 20:01:19.520341  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:19.520364  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:19.520390  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:19.604632  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:19.604674  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:19.621452  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:19.621486  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:19.680554  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:19.680581  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:19.680597  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:19.712658  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:19.712693  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:19.740964  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:19.740997  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:19.773014  596882 logs.go:123] Gathering logs for kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc] ...
	I1217 20:01:19.773045  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	W1217 20:01:19.802765  596882 logs.go:130] failed kube-apiserver [6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc": Process exited with status 1
	stdout:
	
	stderr:
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	 output: 
	** stderr ** 
	E1217 20:01:19.800342    5778 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist" containerID="6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc"
	time="2025-12-17T20:01:19Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc\": container with ID starting with 6822d1aff73905867cd00c8bd3d996a8d98a37c238f53bab351d576f0d6b34fc not found: ID does not exist"
	
	** /stderr **
	I1217 20:01:19.802797  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:19.802814  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:19.830245  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:19.830272  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:19.857816  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:19.857846  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1217 20:01:19.879976  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	W1217 20:01:21.880734  631473 node_ready.go:57] node "default-k8s-diff-port-759234" has "Ready":"False" status (will retry)
	I1217 20:01:23.380865  631473 node_ready.go:49] node "default-k8s-diff-port-759234" is "Ready"
	I1217 20:01:23.380895  631473 node_ready.go:38] duration metric: took 12.003962738s for node "default-k8s-diff-port-759234" to be "Ready" ...
	I1217 20:01:23.380915  631473 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:01:23.380972  631473 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:01:23.396369  631473 api_server.go:72] duration metric: took 12.418505942s to wait for apiserver process to appear ...
	I1217 20:01:23.396398  631473 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:01:23.396420  631473 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1217 20:01:23.403019  631473 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1217 20:01:23.404215  631473 api_server.go:141] control plane version: v1.34.3
	I1217 20:01:23.404252  631473 api_server.go:131] duration metric: took 7.845679ms to wait for apiserver health ...
	I1217 20:01:23.404264  631473 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:01:23.408871  631473 system_pods.go:59] 8 kube-system pods found
	I1217 20:01:23.408924  631473 system_pods.go:61] "coredns-66bc5c9577-lv4jd" [a17149a4-0ee9-41fb-96d8-42931da4569f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:01:23.408938  631473 system_pods.go:61] "etcd-default-k8s-diff-port-759234" [eacddd1e-e222-4965-a08d-f4dcafd96988] Running
	I1217 20:01:23.408953  631473 system_pods.go:61] "kindnet-dcwlb" [36f8201b-9363-43f5-8a85-e9291ee817a3] Running
	I1217 20:01:23.408969  631473 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-759234" [158006e8-bde6-4d3a-95c5-fa802ff24d99] Running
	I1217 20:01:23.408975  631473 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-759234" [ee3ad334-3e70-45cb-8f7c-49a4bff823ae] Running
	I1217 20:01:23.408981  631473 system_pods.go:61] "kube-proxy-ztxcd" [a079c536-3edc-4c6e-b4a0-2cd7c0aa432f] Running
	I1217 20:01:23.408986  631473 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-759234" [9b1fed92-db91-4b2c-9713-37802f95b030] Running
	I1217 20:01:23.408994  631473 system_pods.go:61] "storage-provisioner" [885e7cc2-77a0-4ba5-be19-0f37c71945f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:01:23.409002  631473 system_pods.go:74] duration metric: took 4.732227ms to wait for pod list to return data ...
	I1217 20:01:23.409014  631473 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:01:23.412304  631473 default_sa.go:45] found service account: "default"
	I1217 20:01:23.412337  631473 default_sa.go:55] duration metric: took 3.315567ms for default service account to be created ...
	I1217 20:01:23.412349  631473 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:01:23.416111  631473 system_pods.go:86] 8 kube-system pods found
	I1217 20:01:23.416150  631473 system_pods.go:89] "coredns-66bc5c9577-lv4jd" [a17149a4-0ee9-41fb-96d8-42931da4569f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:01:23.416164  631473 system_pods.go:89] "etcd-default-k8s-diff-port-759234" [eacddd1e-e222-4965-a08d-f4dcafd96988] Running
	I1217 20:01:23.416174  631473 system_pods.go:89] "kindnet-dcwlb" [36f8201b-9363-43f5-8a85-e9291ee817a3] Running
	I1217 20:01:23.416181  631473 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759234" [158006e8-bde6-4d3a-95c5-fa802ff24d99] Running
	I1217 20:01:23.416186  631473 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759234" [ee3ad334-3e70-45cb-8f7c-49a4bff823ae] Running
	I1217 20:01:23.416193  631473 system_pods.go:89] "kube-proxy-ztxcd" [a079c536-3edc-4c6e-b4a0-2cd7c0aa432f] Running
	I1217 20:01:23.416198  631473 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759234" [9b1fed92-db91-4b2c-9713-37802f95b030] Running
	I1217 20:01:23.416211  631473 system_pods.go:89] "storage-provisioner" [885e7cc2-77a0-4ba5-be19-0f37c71945f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:01:23.416247  631473 retry.go:31] will retry after 252.16906ms: missing components: kube-dns
	I1217 20:01:23.672886  631473 system_pods.go:86] 8 kube-system pods found
	I1217 20:01:23.672955  631473 system_pods.go:89] "coredns-66bc5c9577-lv4jd" [a17149a4-0ee9-41fb-96d8-42931da4569f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:01:23.672969  631473 system_pods.go:89] "etcd-default-k8s-diff-port-759234" [eacddd1e-e222-4965-a08d-f4dcafd96988] Running
	I1217 20:01:23.672978  631473 system_pods.go:89] "kindnet-dcwlb" [36f8201b-9363-43f5-8a85-e9291ee817a3] Running
	I1217 20:01:23.672988  631473 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759234" [158006e8-bde6-4d3a-95c5-fa802ff24d99] Running
	I1217 20:01:23.672994  631473 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759234" [ee3ad334-3e70-45cb-8f7c-49a4bff823ae] Running
	I1217 20:01:23.672998  631473 system_pods.go:89] "kube-proxy-ztxcd" [a079c536-3edc-4c6e-b4a0-2cd7c0aa432f] Running
	I1217 20:01:23.673001  631473 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759234" [9b1fed92-db91-4b2c-9713-37802f95b030] Running
	I1217 20:01:23.673006  631473 system_pods.go:89] "storage-provisioner" [885e7cc2-77a0-4ba5-be19-0f37c71945f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:01:23.673022  631473 retry.go:31] will retry after 238.606791ms: missing components: kube-dns
	I1217 20:01:23.916617  631473 system_pods.go:86] 8 kube-system pods found
	I1217 20:01:23.916664  631473 system_pods.go:89] "coredns-66bc5c9577-lv4jd" [a17149a4-0ee9-41fb-96d8-42931da4569f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:01:23.916673  631473 system_pods.go:89] "etcd-default-k8s-diff-port-759234" [eacddd1e-e222-4965-a08d-f4dcafd96988] Running
	I1217 20:01:23.916683  631473 system_pods.go:89] "kindnet-dcwlb" [36f8201b-9363-43f5-8a85-e9291ee817a3] Running
	I1217 20:01:23.916689  631473 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759234" [158006e8-bde6-4d3a-95c5-fa802ff24d99] Running
	I1217 20:01:23.916705  631473 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759234" [ee3ad334-3e70-45cb-8f7c-49a4bff823ae] Running
	I1217 20:01:23.916711  631473 system_pods.go:89] "kube-proxy-ztxcd" [a079c536-3edc-4c6e-b4a0-2cd7c0aa432f] Running
	I1217 20:01:23.916717  631473 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759234" [9b1fed92-db91-4b2c-9713-37802f95b030] Running
	I1217 20:01:23.916729  631473 system_pods.go:89] "storage-provisioner" [885e7cc2-77a0-4ba5-be19-0f37c71945f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:01:23.916749  631473 retry.go:31] will retry after 358.174281ms: missing components: kube-dns
	I1217 20:01:24.278660  631473 system_pods.go:86] 8 kube-system pods found
	I1217 20:01:24.278692  631473 system_pods.go:89] "coredns-66bc5c9577-lv4jd" [a17149a4-0ee9-41fb-96d8-42931da4569f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:01:24.278700  631473 system_pods.go:89] "etcd-default-k8s-diff-port-759234" [eacddd1e-e222-4965-a08d-f4dcafd96988] Running
	I1217 20:01:24.278710  631473 system_pods.go:89] "kindnet-dcwlb" [36f8201b-9363-43f5-8a85-e9291ee817a3] Running
	I1217 20:01:24.278716  631473 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759234" [158006e8-bde6-4d3a-95c5-fa802ff24d99] Running
	I1217 20:01:24.278725  631473 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759234" [ee3ad334-3e70-45cb-8f7c-49a4bff823ae] Running
	I1217 20:01:24.278731  631473 system_pods.go:89] "kube-proxy-ztxcd" [a079c536-3edc-4c6e-b4a0-2cd7c0aa432f] Running
	I1217 20:01:24.278740  631473 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759234" [9b1fed92-db91-4b2c-9713-37802f95b030] Running
	I1217 20:01:24.278748  631473 system_pods.go:89] "storage-provisioner" [885e7cc2-77a0-4ba5-be19-0f37c71945f8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:01:24.278768  631473 retry.go:31] will retry after 498.584028ms: missing components: kube-dns
	I1217 20:01:24.782450  631473 system_pods.go:86] 8 kube-system pods found
	I1217 20:01:24.782483  631473 system_pods.go:89] "coredns-66bc5c9577-lv4jd" [a17149a4-0ee9-41fb-96d8-42931da4569f] Running
	I1217 20:01:24.782492  631473 system_pods.go:89] "etcd-default-k8s-diff-port-759234" [eacddd1e-e222-4965-a08d-f4dcafd96988] Running
	I1217 20:01:24.782500  631473 system_pods.go:89] "kindnet-dcwlb" [36f8201b-9363-43f5-8a85-e9291ee817a3] Running
	I1217 20:01:24.782506  631473 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-759234" [158006e8-bde6-4d3a-95c5-fa802ff24d99] Running
	I1217 20:01:24.782512  631473 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-759234" [ee3ad334-3e70-45cb-8f7c-49a4bff823ae] Running
	I1217 20:01:24.782517  631473 system_pods.go:89] "kube-proxy-ztxcd" [a079c536-3edc-4c6e-b4a0-2cd7c0aa432f] Running
	I1217 20:01:24.782533  631473 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-759234" [9b1fed92-db91-4b2c-9713-37802f95b030] Running
	I1217 20:01:24.782544  631473 system_pods.go:89] "storage-provisioner" [885e7cc2-77a0-4ba5-be19-0f37c71945f8] Running
	I1217 20:01:24.782556  631473 system_pods.go:126] duration metric: took 1.37019735s to wait for k8s-apps to be running ...
	I1217 20:01:24.782564  631473 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:01:24.782622  631473 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:01:24.798294  631473 system_svc.go:56] duration metric: took 15.71601ms WaitForService to wait for kubelet
	I1217 20:01:24.798333  631473 kubeadm.go:587] duration metric: took 13.820474906s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:01:24.798361  631473 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:01:24.801773  631473 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:01:24.801810  631473 node_conditions.go:123] node cpu capacity is 8
	I1217 20:01:24.801830  631473 node_conditions.go:105] duration metric: took 3.463259ms to run NodePressure ...
	I1217 20:01:24.801848  631473 start.go:242] waiting for startup goroutines ...
	I1217 20:01:24.801858  631473 start.go:247] waiting for cluster config update ...
	I1217 20:01:24.801871  631473 start.go:256] writing updated cluster config ...
	I1217 20:01:24.802207  631473 ssh_runner.go:195] Run: rm -f paused
	I1217 20:01:24.806303  631473 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:24.809672  631473 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-lv4jd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:24.813827  631473 pod_ready.go:94] pod "coredns-66bc5c9577-lv4jd" is "Ready"
	I1217 20:01:24.813858  631473 pod_ready.go:86] duration metric: took 4.160994ms for pod "coredns-66bc5c9577-lv4jd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:24.815930  631473 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:24.819912  631473 pod_ready.go:94] pod "etcd-default-k8s-diff-port-759234" is "Ready"
	I1217 20:01:24.819940  631473 pod_ready.go:86] duration metric: took 3.984122ms for pod "etcd-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:24.821950  631473 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:24.825892  631473 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-759234" is "Ready"
	I1217 20:01:24.825916  631473 pod_ready.go:86] duration metric: took 3.940319ms for pod "kube-apiserver-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:24.827734  631473 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:25.211123  631473 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-759234" is "Ready"
	I1217 20:01:25.211172  631473 pod_ready.go:86] duration metric: took 383.411453ms for pod "kube-controller-manager-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:25.410944  631473 pod_ready.go:83] waiting for pod "kube-proxy-ztxcd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:25.811717  631473 pod_ready.go:94] pod "kube-proxy-ztxcd" is "Ready"
	I1217 20:01:25.811745  631473 pod_ready.go:86] duration metric: took 400.768219ms for pod "kube-proxy-ztxcd" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:26.012163  631473 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:26.411031  631473 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-759234" is "Ready"
	I1217 20:01:26.411060  631473 pod_ready.go:86] duration metric: took 398.865204ms for pod "kube-scheduler-default-k8s-diff-port-759234" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:01:26.411102  631473 pod_ready.go:40] duration metric: took 1.604735287s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:01:26.468761  631473 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:01:26.470775  631473 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-759234" cluster and "default" namespace by default
	I1217 20:01:22.408958  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:22.409448  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:22.409527  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:22.409608  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:22.437525  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:22.437544  596882 cri.go:89] found id: ""
	I1217 20:01:22.437552  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:22.437601  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:22.441667  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:22.441738  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:22.469984  596882 cri.go:89] found id: ""
	I1217 20:01:22.470013  596882 logs.go:282] 0 containers: []
	W1217 20:01:22.470025  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:22.470032  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:22.470111  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:22.498303  596882 cri.go:89] found id: ""
	I1217 20:01:22.498331  596882 logs.go:282] 0 containers: []
	W1217 20:01:22.498342  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:22.498348  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:22.498418  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:22.527470  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:22.527491  596882 cri.go:89] found id: ""
	I1217 20:01:22.527502  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:22.527576  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:22.531711  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:22.531778  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:22.564599  596882 cri.go:89] found id: ""
	I1217 20:01:22.564630  596882 logs.go:282] 0 containers: []
	W1217 20:01:22.564643  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:22.564651  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:22.564732  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:22.595111  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:22.595136  596882 cri.go:89] found id: "deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:22.595142  596882 cri.go:89] found id: ""
	I1217 20:01:22.595151  596882 logs.go:282] 2 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c]
	I1217 20:01:22.595222  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:22.599467  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:22.603371  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:22.603446  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:22.636370  596882 cri.go:89] found id: ""
	I1217 20:01:22.636400  596882 logs.go:282] 0 containers: []
	W1217 20:01:22.636413  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:22.636421  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:22.636489  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:22.670436  596882 cri.go:89] found id: ""
	I1217 20:01:22.670473  596882 logs.go:282] 0 containers: []
	W1217 20:01:22.670484  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:22.670500  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:22.670512  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:22.692632  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:22.692672  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:22.756173  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:22.756198  596882 logs.go:123] Gathering logs for kube-controller-manager [deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c] ...
	I1217 20:01:22.756215  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 deb0ef3d09cc535bcd10a8ecc98a8afc0243fdcaf4256b36cc91b5d3e2c3810c"
	I1217 20:01:22.786400  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:22.786438  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:22.838249  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:22.838293  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:22.927734  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:22.927770  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:22.965300  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:22.965338  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:22.993969  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:22.993993  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:23.030177  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:23.030214  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:25.567164  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:25.567571  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:25.567632  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:25.567689  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:25.599759  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:25.599784  596882 cri.go:89] found id: ""
	I1217 20:01:25.599795  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:25.599859  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:25.604023  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:25.604098  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:25.637338  596882 cri.go:89] found id: ""
	I1217 20:01:25.637367  596882 logs.go:282] 0 containers: []
	W1217 20:01:25.637378  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:25.637386  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:25.637445  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:25.674645  596882 cri.go:89] found id: ""
	I1217 20:01:25.674674  596882 logs.go:282] 0 containers: []
	W1217 20:01:25.674731  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:25.674750  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:25.674831  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:25.706811  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:25.706839  596882 cri.go:89] found id: ""
	I1217 20:01:25.706849  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:25.706935  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:25.710883  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:25.710965  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:25.739028  596882 cri.go:89] found id: ""
	I1217 20:01:25.739052  596882 logs.go:282] 0 containers: []
	W1217 20:01:25.739060  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:25.739066  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:25.739130  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:25.768776  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:25.768804  596882 cri.go:89] found id: ""
	I1217 20:01:25.768816  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:25.768875  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:25.773715  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:25.773798  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:25.810555  596882 cri.go:89] found id: ""
	I1217 20:01:25.810588  596882 logs.go:282] 0 containers: []
	W1217 20:01:25.810599  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:25.810606  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:25.810663  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:25.847410  596882 cri.go:89] found id: ""
	I1217 20:01:25.847437  596882 logs.go:282] 0 containers: []
	W1217 20:01:25.847447  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:25.847458  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:25.847475  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:25.882476  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:25.882507  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:25.913026  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:25.913060  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:25.976801  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:25.976844  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:26.013986  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:26.014013  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:26.120455  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:26.120488  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:26.139124  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:26.139159  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:26.206646  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:26.206672  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:26.206692  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	
	
	==> CRI-O <==
	Dec 17 20:00:53 old-k8s-version-894575 crio[569]: time="2025-12-17T20:00:53.330714955Z" level=info msg="Started container" PID=1740 containerID=66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper id=0c0265a2-b75a-40ed-ac4c-eb8098f422d1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=40425cb0fde9ee8d85f21bb137c410e64765d8ece68848c3e3ed94ea57e56ba9
	Dec 17 20:00:54 old-k8s-version-894575 crio[569]: time="2025-12-17T20:00:54.285423666Z" level=info msg="Removing container: c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036" id=3fe17c36-dea4-4ba1-b602-6be600c26069 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:00:54 old-k8s-version-894575 crio[569]: time="2025-12-17T20:00:54.295531789Z" level=info msg="Removed container c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=3fe17c36-dea4-4ba1-b602-6be600c26069 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.313914963Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=92ea9391-8b27-4e33-9581-34fd903fe249 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.316020157Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ca8fc3e9-b245-4c74-9f76-b633d8e962a2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.317581554Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f7874b29-0a0c-4331-bedb-30cb4e1a1749 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.317735426Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.322597201Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.322782153Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/779521eae0643fb583d61062bdbaa1ac73ade81b2c991635f89de4b746dd3145/merged/etc/passwd: no such file or directory"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.322809813Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/779521eae0643fb583d61062bdbaa1ac73ade81b2c991635f89de4b746dd3145/merged/etc/group: no such file or directory"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.323138775Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.366308643Z" level=info msg="Created container 464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da: kube-system/storage-provisioner/storage-provisioner" id=f7874b29-0a0c-4331-bedb-30cb4e1a1749 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.366939188Z" level=info msg="Starting container: 464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da" id=98f96614-c7b0-4501-9396-270f3c640c30 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:05 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:05.368711176Z" level=info msg="Started container" PID=1754 containerID=464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da description=kube-system/storage-provisioner/storage-provisioner id=98f96614-c7b0-4501-9396-270f3c640c30 name=/runtime.v1.RuntimeService/StartContainer sandboxID=2edc8c4cafe694fa961529bb9164cf0914e5280093ae57b33a8d2a47c8edb95a
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.189395255Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=cd75b443-5122-449c-88d3-6d4839eb17af name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.192794917Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d1424d61-faf8-4319-8deb-f80a09cb877b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.194456882Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=14dfc5c7-0541-4d01-8c00-7e0e4aa67951 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.194611308Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.205445404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.206212472Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.250993166Z" level=info msg="Created container 294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=14dfc5c7-0541-4d01-8c00-7e0e4aa67951 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.252981831Z" level=info msg="Starting container: 294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff" id=11cdb0b3-f2db-4b24-9941-0169dc938f3a name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.255288122Z" level=info msg="Started container" PID=1773 containerID=294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper id=11cdb0b3-f2db-4b24-9941-0169dc938f3a name=/runtime.v1.RuntimeService/StartContainer sandboxID=40425cb0fde9ee8d85f21bb137c410e64765d8ece68848c3e3ed94ea57e56ba9
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.336176576Z" level=info msg="Removing container: 66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387" id=c466adae-b5b4-4518-9c57-dae981e76481 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:01:11 old-k8s-version-894575 crio[569]: time="2025-12-17T20:01:11.349011952Z" level=info msg="Removed container 66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp/dashboard-metrics-scraper" id=c466adae-b5b4-4518-9c57-dae981e76481 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	294d1768cc937       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   2                   40425cb0fde9e       dashboard-metrics-scraper-5f989dc9cf-5hjsp       kubernetes-dashboard
	464015c6e9608       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   2edc8c4cafe69       storage-provisioner                              kube-system
	75a986f0ae8c3       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   e02d0f1da3335       kubernetes-dashboard-8694d4445c-jb6px            kubernetes-dashboard
	ab6e1c127ed17       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   a450ad554f409       coredns-5dd5756b68-gbhs5                         kube-system
	241b33f7c414a       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   1e67ad478e572       busybox                                          default
	780e65a762a10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   2edc8c4cafe69       storage-provisioner                              kube-system
	71ddc80929603       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   d89f825e36d9e       kube-proxy-bdzb6                                 kube-system
	3f0565e2bdcd7       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   75b79f69f05ae       kindnet-p8d9f                                    kube-system
	484a1e94925a1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   76d00afd11470       kube-apiserver-old-k8s-version-894575            kube-system
	71cce81b2a47a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   79ef6facea046       etcd-old-k8s-version-894575                      kube-system
	467ab50d14f76       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   25967367ece24       kube-controller-manager-old-k8s-version-894575   kube-system
	80c6fccb8bdf5       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           57 seconds ago      Running             kube-scheduler              0                   76c7f66b3532d       kube-scheduler-old-k8s-version-894575            kube-system
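	Note: the container-status table above is the output of the "sudo crictl ps -a" invocation recorded earlier in this trace. As a rough reproduction sketch (assuming the profile name old-k8s-version-894575 used in this run), the same listing can usually be obtained from the host with:
	  minikube -p old-k8s-version-894575 ssh -- sudo crictl ps -a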
	
	
	==> coredns [ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44913 - 16568 "HINFO IN 4519636891788960163.4803829897480741640. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015751219s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
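	Note: the repeated "waiting for Kubernetes API before starting server" lines above mean the coredns kubernetes plugin had not yet synced with the apiserver; per the WARNING line it eventually started with an unsynced API, and the ready plugin kept reporting 'Still waiting on: "kubernetes"' until sync completed. A minimal sketch for pulling the same log on the node, using the container ID from the section header and the crictl form minikube itself runs, is:
	  sudo /usr/local/bin/crictl logs --tail 400 ab6e1c127ed17a202d26f0686d15d1e8d81c83b2f3e4ee38703fa2ce3aee6ce2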
	
	
	==> describe nodes <==
	Name:               old-k8s-version-894575
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-894575
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=old-k8s-version-894575
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T19_59_29_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 19:59:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-894575
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:01:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:01:04 +0000   Wed, 17 Dec 2025 19:59:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-894575
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                f9507002-721b-4e21-9c9c-8a3faf234561
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-gbhs5                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-894575                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-p8d9f                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-894575             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-894575    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-bdzb6                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-894575             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-5hjsp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-jb6px             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 106s                 kube-proxy       
	  Normal  Starting                 54s                  kube-proxy       
	  Normal  Starting                 2m7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m7s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m7s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m7s)  kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-894575 event: Registered Node old-k8s-version-894575 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-894575 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node old-k8s-version-894575 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node old-k8s-version-894575 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                  node-controller  Node old-k8s-version-894575 event: Registered Node old-k8s-version-894575 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [71cce81b2a47a327a9532ef2473382c328c9042db27d9361ba053cc1855855f4] <==
	{"level":"info","ts":"2025-12-17T20:00:46.758421Z","caller":"traceutil/trace.go:171","msg":"trace[250100453] transaction","detail":"{read_only:false; response_revision:566; number_of_response:1; }","duration":"186.516699ms","start":"2025-12-17T20:00:46.571889Z","end":"2025-12-17T20:00:46.758406Z","steps":["trace[250100453] 'process raft request'  (duration: 186.411335ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758492Z","caller":"traceutil/trace.go:171","msg":"trace[341412219] transaction","detail":"{read_only:false; response_revision:565; number_of_response:1; }","duration":"186.678933ms","start":"2025-12-17T20:00:46.571801Z","end":"2025-12-17T20:00:46.75848Z","steps":["trace[341412219] 'process raft request'  (duration: 186.333507ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758538Z","caller":"traceutil/trace.go:171","msg":"trace[1109591681] transaction","detail":"{read_only:false; response_revision:568; number_of_response:1; }","duration":"186.0966ms","start":"2025-12-17T20:00:46.572427Z","end":"2025-12-17T20:00:46.758524Z","steps":["trace[1109591681] 'process raft request'  (duration: 185.962038ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758567Z","caller":"traceutil/trace.go:171","msg":"trace[400570659] transaction","detail":"{read_only:false; response_revision:571; number_of_response:1; }","duration":"185.731669ms","start":"2025-12-17T20:00:46.572828Z","end":"2025-12-17T20:00:46.75856Z","steps":["trace[400570659] 'process raft request'  (duration: 185.68739ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758677Z","caller":"traceutil/trace.go:171","msg":"trace[1954442103] transaction","detail":"{read_only:false; response_revision:569; number_of_response:1; }","duration":"186.120677ms","start":"2025-12-17T20:00:46.572545Z","end":"2025-12-17T20:00:46.758666Z","steps":["trace[1954442103] 'process raft request'  (duration: 185.884142ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758686Z","caller":"traceutil/trace.go:171","msg":"trace[1708764686] transaction","detail":"{read_only:false; response_revision:567; number_of_response:1; }","duration":"186.724775ms","start":"2025-12-17T20:00:46.571935Z","end":"2025-12-17T20:00:46.75866Z","steps":["trace[1708764686] 'process raft request'  (duration: 186.426667ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.758701Z","caller":"traceutil/trace.go:171","msg":"trace[1773903274] transaction","detail":"{read_only:false; response_revision:570; number_of_response:1; }","duration":"186.07421ms","start":"2025-12-17T20:00:46.572617Z","end":"2025-12-17T20:00:46.758691Z","steps":["trace[1773903274] 'process raft request'  (duration: 185.850934ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.879784Z","caller":"traceutil/trace.go:171","msg":"trace[941785200] linearizableReadLoop","detail":"{readStateIndex:597; appliedIndex:596; }","duration":"116.010506ms","start":"2025-12-17T20:00:46.76376Z","end":"2025-12-17T20:00:46.879771Z","steps":["trace[941785200] 'read index received'  (duration: 113.834413ms)","trace[941785200] 'applied index is now lower than readState.Index'  (duration: 2.175477ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:00:46.879807Z","caller":"traceutil/trace.go:171","msg":"trace[394468456] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"117.040842ms","start":"2025-12-17T20:00:46.762749Z","end":"2025-12-17T20:00:46.879789Z","steps":["trace[394468456] 'process raft request'  (duration: 114.903255ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:46.879897Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.143024ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kubernetes-dashboard/\" range_end:\"/registry/limitranges/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:00:46.879918Z","caller":"traceutil/trace.go:171","msg":"trace[41405551] range","detail":"{range_begin:/registry/limitranges/kubernetes-dashboard/; range_end:/registry/limitranges/kubernetes-dashboard0; response_count:0; response_revision:572; }","duration":"116.180999ms","start":"2025-12-17T20:00:46.76373Z","end":"2025-12-17T20:00:46.879911Z","steps":["trace[41405551] 'agreement among raft nodes before linearized reading'  (duration: 116.107841ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.884773Z","caller":"traceutil/trace.go:171","msg":"trace[2090708288] transaction","detail":"{read_only:false; response_revision:573; number_of_response:1; }","duration":"120.846518ms","start":"2025-12-17T20:00:46.763911Z","end":"2025-12-17T20:00:46.884757Z","steps":["trace[2090708288] 'process raft request'  (duration: 120.672073ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.884904Z","caller":"traceutil/trace.go:171","msg":"trace[1689218853] transaction","detail":"{read_only:false; response_revision:575; number_of_response:1; }","duration":"117.886828ms","start":"2025-12-17T20:00:46.767002Z","end":"2025-12-17T20:00:46.884889Z","steps":["trace[1689218853] 'process raft request'  (duration: 117.716391ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:46.884941Z","caller":"traceutil/trace.go:171","msg":"trace[1989545465] transaction","detail":"{read_only:false; response_revision:574; number_of_response:1; }","duration":"120.970356ms","start":"2025-12-17T20:00:46.763953Z","end":"2025-12-17T20:00:46.884924Z","steps":["trace[1989545465] 'process raft request'  (duration: 120.733164ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:47.160917Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"174.855867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597792003784280 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf.1882192424dfb849\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf.1882192424dfb849\" value_size:694 lease:499225755149008275 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:00:47.161037Z","caller":"traceutil/trace.go:171","msg":"trace[321171456] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"203.081194ms","start":"2025-12-17T20:00:46.957922Z","end":"2025-12-17T20:00:47.161003Z","steps":["trace[321171456] 'process raft request'  (duration: 28.082186ms)","trace[321171456] 'compare'  (duration: 174.757507ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:00:47.161116Z","caller":"traceutil/trace.go:171","msg":"trace[1884373908] linearizableReadLoop","detail":"{readStateIndex:611; appliedIndex:610; }","duration":"198.514965ms","start":"2025-12-17T20:00:46.962582Z","end":"2025-12-17T20:00:47.161097Z","steps":["trace[1884373908] 'read index received'  (duration: 23.432436ms)","trace[1884373908] 'applied index is now lower than readState.Index'  (duration: 175.079873ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:00:47.161129Z","caller":"traceutil/trace.go:171","msg":"trace[1329714290] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"198.090127ms","start":"2025-12-17T20:00:46.963026Z","end":"2025-12-17T20:00:47.161116Z","steps":["trace[1329714290] 'process raft request'  (duration: 197.977448ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:47.161204Z","caller":"traceutil/trace.go:171","msg":"trace[529436355] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"195.286099ms","start":"2025-12-17T20:00:46.96591Z","end":"2025-12-17T20:00:47.161196Z","steps":["trace[529436355] 'process raft request'  (duration: 195.252827ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:47.161285Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.71227ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px\" ","response":"range_response_count:1 size:2849"}
	{"level":"info","ts":"2025-12-17T20:00:47.161323Z","caller":"traceutil/trace.go:171","msg":"trace[1907353273] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px; range_end:; response_count:1; response_revision:590; }","duration":"198.761267ms","start":"2025-12-17T20:00:46.962552Z","end":"2025-12-17T20:00:47.161314Z","steps":["trace[1907353273] 'agreement among raft nodes before linearized reading'  (duration: 198.635483ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:47.161326Z","caller":"traceutil/trace.go:171","msg":"trace[2056318030] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"197.910452ms","start":"2025-12-17T20:00:46.963407Z","end":"2025-12-17T20:00:47.161318Z","steps":["trace[2056318030] 'process raft request'  (duration: 197.651321ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:00:47.161362Z","caller":"traceutil/trace.go:171","msg":"trace[934537175] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"195.436062ms","start":"2025-12-17T20:00:46.965913Z","end":"2025-12-17T20:00:47.161349Z","steps":["trace[934537175] 'process raft request'  (duration: 195.199309ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-17T20:00:47.161511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.772273ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard\" ","response":"range_response_count:1 size:4894"}
	{"level":"info","ts":"2025-12-17T20:00:47.161536Z","caller":"traceutil/trace.go:171","msg":"trace[742560136] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/kubernetes-dashboard; range_end:; response_count:1; response_revision:590; }","duration":"196.805088ms","start":"2025-12-17T20:00:46.964724Z","end":"2025-12-17T20:00:47.161529Z","steps":["trace[742560136] 'agreement among raft nodes before linearized reading'  (duration: 196.733641ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:01:28 up  1:43,  0 user,  load average: 3.48, 3.24, 2.33
	Linux old-k8s-version-894575 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f0565e2bdcd725f2a285b6794d9cb087b195ddb248255a1410193df892996c7] <==
	I1217 20:00:34.880801       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:00:34.881200       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 20:00:34.881439       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:00:34.881499       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:00:34.881545       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:00:35Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:00:35.114742       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:00:35.176135       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:00:35.176226       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:00:35.176406       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:00:35.577066       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:00:35.577191       1 metrics.go:72] Registering metrics
	I1217 20:00:35.577322       1 controller.go:711] "Syncing nftables rules"
	I1217 20:00:45.088169       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:00:45.088214       1 main.go:301] handling current node
	I1217 20:00:55.088269       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:00:55.088307       1 main.go:301] handling current node
	I1217 20:01:05.088305       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:01:05.088394       1 main.go:301] handling current node
	I1217 20:01:15.091232       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:01:15.091275       1 main.go:301] handling current node
	I1217 20:01:25.087954       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:01:25.088015       1 main.go:301] handling current node
	
	
	==> kube-apiserver [484a1e94925a1a7ea27bb0e8881ce92d0ba724ee5dc0be0b55aa22d4968fb0f9] <==
	I1217 20:00:33.901263       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1217 20:00:33.981585       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1217 20:00:33.981652       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1217 20:00:33.985051       1 shared_informer.go:318] Caches are synced for configmaps
	I1217 20:00:33.985221       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1217 20:00:33.994290       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1217 20:00:33.995480       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1217 20:00:33.995695       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1217 20:00:34.001215       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1217 20:00:34.001298       1 aggregator.go:166] initial CRD sync complete...
	I1217 20:00:34.001309       1 autoregister_controller.go:141] Starting autoregister controller
	I1217 20:00:34.001316       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:00:34.001324       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:00:34.028411       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:00:34.903148       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:00:35.345509       1 controller.go:624] quota admission added evaluator for: namespaces
	I1217 20:00:35.394990       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1217 20:00:35.417435       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:00:35.425215       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:00:35.438802       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1217 20:00:35.488480       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.70.150"}
	I1217 20:00:35.502940       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.243.247"}
	I1217 20:00:46.570322       1 controller.go:624] quota admission added evaluator for: endpoints
	I1217 20:00:46.571402       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:00:46.572194       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [467ab50d14f76d9794b7546e57cbb0eec5d9291e092f5be7dae85296a7ea1b59] <==
	I1217 20:00:46.760679       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set kubernetes-dashboard-8694d4445c to 1"
	I1217 20:00:46.760705       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1217 20:00:46.761337       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.022µs"
	I1217 20:00:46.808181       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 20:00:46.808212       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1217 20:00:46.817357       1 shared_informer.go:318] Caches are synced for garbage collector
	I1217 20:00:46.911036       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-jb6px"
	I1217 20:00:46.911069       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-5hjsp"
	I1217 20:00:46.961997       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="200.746754ms"
	I1217 20:00:46.962451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="201.33235ms"
	I1217 20:00:47.163370       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="200.876149ms"
	I1217 20:00:47.163510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="201.467112ms"
	I1217 20:00:47.163559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.338µs"
	I1217 20:00:47.163580       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="47.035µs"
	I1217 20:00:47.175350       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="73.866µs"
	I1217 20:00:47.188128       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.901µs"
	I1217 20:00:50.292584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="8.817926ms"
	I1217 20:00:50.292710       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="72.031µs"
	I1217 20:00:53.292745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="67.55µs"
	I1217 20:00:54.295770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="80.065µs"
	I1217 20:00:55.300142       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="176.961µs"
	I1217 20:01:09.299361       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.705559ms"
	I1217 20:01:09.299482       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.993µs"
	I1217 20:01:11.353450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="93.738µs"
	I1217 20:01:17.276777       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="102.503µs"
	
	
	==> kube-proxy [71ddc80929603be65503dc71e856358367024bf67d78ffb6c1371882b159eff9] <==
	I1217 20:00:34.730661       1 server_others.go:69] "Using iptables proxy"
	I1217 20:00:34.747455       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1217 20:00:34.807177       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:00:34.810928       1 server_others.go:152] "Using iptables Proxier"
	I1217 20:00:34.810968       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1217 20:00:34.810976       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1217 20:00:34.811009       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1217 20:00:34.817360       1 server.go:846] "Version info" version="v1.28.0"
	I1217 20:00:34.817390       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:34.820266       1 config.go:315] "Starting node config controller"
	I1217 20:00:34.820293       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1217 20:00:34.820740       1 config.go:188] "Starting service config controller"
	I1217 20:00:34.820762       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1217 20:00:34.820787       1 config.go:97] "Starting endpoint slice config controller"
	I1217 20:00:34.820801       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1217 20:00:34.920643       1 shared_informer.go:318] Caches are synced for node config
	I1217 20:00:34.920939       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1217 20:00:34.921028       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [80c6fccb8bdf5504ced354de5e08d38c6385613976d63820be5bf2822f675a3d] <==
	I1217 20:00:32.226283       1 serving.go:348] Generated self-signed cert in-memory
	W1217 20:00:33.971389       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:00:33.971446       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found]
	W1217 20:00:33.971480       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:00:33.971495       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:00:34.004448       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1217 20:00:34.004530       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:00:34.007123       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:00:34.007223       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1217 20:00:34.008209       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1217 20:00:34.008290       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1217 20:00:34.107424       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.128824     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/fc72ff5c-fb85-4431-a4f5-88e4e1f04888-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-jb6px\" (UID: \"fc72ff5c-fb85-4431-a4f5-88e4e1f04888\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px"
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.128888     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/7179f0cb-5d60-4b81-b4cb-c5e37566bc08-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-5hjsp\" (UID: \"7179f0cb-5d60-4b81-b4cb-c5e37566bc08\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp"
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.128970     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m24xn\" (UniqueName: \"kubernetes.io/projected/fc72ff5c-fb85-4431-a4f5-88e4e1f04888-kube-api-access-m24xn\") pod \"kubernetes-dashboard-8694d4445c-jb6px\" (UID: \"fc72ff5c-fb85-4431-a4f5-88e4e1f04888\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px"
	Dec 17 20:00:47 old-k8s-version-894575 kubelet[734]: I1217 20:00:47.129006     734 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfkfb\" (UniqueName: \"kubernetes.io/projected/7179f0cb-5d60-4b81-b4cb-c5e37566bc08-kube-api-access-zfkfb\") pod \"dashboard-metrics-scraper-5f989dc9cf-5hjsp\" (UID: \"7179f0cb-5d60-4b81-b4cb-c5e37566bc08\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp"
	Dec 17 20:00:50 old-k8s-version-894575 kubelet[734]: I1217 20:00:50.283963     734 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-jb6px" podStartSLOduration=1.437734223 podCreationTimestamp="2025-12-17 20:00:46 +0000 UTC" firstStartedPulling="2025-12-17 20:00:47.28960755 +0000 UTC m=+16.215365773" lastFinishedPulling="2025-12-17 20:00:50.135754728 +0000 UTC m=+19.061512958" observedRunningTime="2025-12-17 20:00:50.283564089 +0000 UTC m=+19.209322322" watchObservedRunningTime="2025-12-17 20:00:50.283881408 +0000 UTC m=+19.209639641"
	Dec 17 20:00:53 old-k8s-version-894575 kubelet[734]: I1217 20:00:53.279180     734 scope.go:117] "RemoveContainer" containerID="c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036"
	Dec 17 20:00:54 old-k8s-version-894575 kubelet[734]: I1217 20:00:54.283949     734 scope.go:117] "RemoveContainer" containerID="c33fb87cb51628bc9612395483e504a89240391a0076300e43ff9e5c0a7be036"
	Dec 17 20:00:54 old-k8s-version-894575 kubelet[734]: I1217 20:00:54.284172     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:00:54 old-k8s-version-894575 kubelet[734]: E1217 20:00:54.284510     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:00:55 old-k8s-version-894575 kubelet[734]: I1217 20:00:55.288412     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:00:55 old-k8s-version-894575 kubelet[734]: E1217 20:00:55.288778     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:00:57 old-k8s-version-894575 kubelet[734]: I1217 20:00:57.267231     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:00:57 old-k8s-version-894575 kubelet[734]: E1217 20:00:57.267519     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:01:05 old-k8s-version-894575 kubelet[734]: I1217 20:01:05.313323     734 scope.go:117] "RemoveContainer" containerID="780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: I1217 20:01:11.188425     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: I1217 20:01:11.334368     734 scope.go:117] "RemoveContainer" containerID="66f27a1cc9b649019a571f7ba9e5a7ceb6356098743d0b857d825bd8df809387"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: I1217 20:01:11.334700     734 scope.go:117] "RemoveContainer" containerID="294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	Dec 17 20:01:11 old-k8s-version-894575 kubelet[734]: E1217 20:01:11.335045     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:01:17 old-k8s-version-894575 kubelet[734]: I1217 20:01:17.266601     734 scope.go:117] "RemoveContainer" containerID="294d1768cc9371cf9e11f88d1708895d4e38b481f60bc8fc77e44ab1fb18b5ff"
	Dec 17 20:01:17 old-k8s-version-894575 kubelet[734]: E1217 20:01:17.266890     734 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-5hjsp_kubernetes-dashboard(7179f0cb-5d60-4b81-b4cb-c5e37566bc08)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-5hjsp" podUID="7179f0cb-5d60-4b81-b4cb-c5e37566bc08"
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:01:23 old-k8s-version-894575 kubelet[734]: I1217 20:01:23.384547     734 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:01:23 old-k8s-version-894575 systemd[1]: kubelet.service: Consumed 1.594s CPU time.
	
	
	==> kubernetes-dashboard [75a986f0ae8c399acd6a7e6fb4b4edd21dd8ecafde18a0e3734080cd5e518d63] <==
	2025/12/17 20:00:50 Starting overwatch
	2025/12/17 20:00:50 Using namespace: kubernetes-dashboard
	2025/12/17 20:00:50 Using in-cluster config to connect to apiserver
	2025/12/17 20:00:50 Using secret token for csrf signing
	2025/12/17 20:00:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:00:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:00:50 Successful initial request to the apiserver, version: v1.28.0
	2025/12/17 20:00:50 Generating JWE encryption key
	2025/12/17 20:00:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:00:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:00:50 Initializing JWE encryption key from synchronized object
	2025/12/17 20:00:50 Creating in-cluster Sidecar client
	2025/12/17 20:00:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:00:50 Serving insecurely on HTTP port: 9090
	2025/12/17 20:01:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [464015c6e96083c6df4b19581746c43903d1b30015e9e8e6a22182712cc3e2da] <==
	I1217 20:01:05.381524       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:01:05.390175       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:01:05.390214       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1217 20:01:22.788587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:01:22.788662       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"443e0966-e91f-456b-b43e-a7e2d61f2da7", APIVersion:"v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-894575_dfdce6b5-33f1-4b4e-869f-53f9ae2d66d2 became leader
	I1217 20:01:22.788755       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-894575_dfdce6b5-33f1-4b4e-869f-53f9ae2d66d2!
	I1217 20:01:22.889009       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-894575_dfdce6b5-33f1-4b4e-869f-53f9ae2d66d2!
	
	
	==> storage-provisioner [780e65a762a1065439990615f358a4208007b4713894463341d9a2f8f9b91b33] <==
	I1217 20:00:34.656300       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:01:04.660666       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-894575 -n old-k8s-version-894575
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-894575 -n old-k8s-version-894575: exit status 2 (355.517848ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-894575 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.91s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (555.366796ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-759234 describe deploy/metrics-server -n kube-system: exit status 1 (211.242524ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-759234 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-759234
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-759234:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8",
	        "Created": "2025-12-17T20:00:47.282778313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 632387,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:00:47.336056904Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/hosts",
	        "LogPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8-json.log",
	        "Name": "/default-k8s-diff-port-759234",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-759234:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-759234",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8",
	                "LowerDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-759234",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-759234/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-759234",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-759234",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-759234",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "e8c29d1fd4813de6dd30565d18513e235271f688a55a20f0bb448d5eebb1a835",
	            "SandboxKey": "/var/run/docker/netns/e8c29d1fd481",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-759234": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "034e5df717c044fefebfa38f3b7a5265a61b576bc983becdb12880ee6b18c027",
	                    "EndpointID": "309bd098cb5d36915134ea1de98f5ceecdeaa23620f424349d50d229879d410d",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "5e:c4:e7:18:48:b3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-759234",
	                        "fb8483ff8e2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759234 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-759234 logs -n 25: (1.684518602s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p NoKubernetes-327438                                                                                                                                                                                                                             │ NoKubernetes-327438          │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ delete  │ -p disable-driver-mounts-890254                                                                                                                                                                                                                    │ disable-driver-mounts-890254 │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p old-k8s-version-894575 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                          │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                          │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                         │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:01:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:01:33.091895  641791 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:01:33.092050  641791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:33.092062  641791 out.go:374] Setting ErrFile to fd 2...
	I1217 20:01:33.092069  641791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:33.092310  641791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:01:33.092850  641791 out.go:368] Setting JSON to false
	I1217 20:01:33.094021  641791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6244,"bootTime":1765995449,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:01:33.094099  641791 start.go:143] virtualization: kvm guest
	I1217 20:01:33.096247  641791 out.go:179] * [embed-certs-147021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:01:33.097457  641791 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:01:33.097511  641791 notify.go:221] Checking for updates...
	I1217 20:01:33.100429  641791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:01:33.102070  641791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:33.103213  641791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:01:33.104315  641791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:01:33.105413  641791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:01:33.107203  641791 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:33.107374  641791 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:33.107519  641791 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:33.107648  641791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:01:33.133692  641791 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:01:33.133791  641791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:01:33.197834  641791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 20:01:33.187070252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:01:33.198041  641791 docker.go:319] overlay module found
	I1217 20:01:33.200929  641791 out.go:179] * Using the docker driver based on user configuration
	I1217 20:01:33.202110  641791 start.go:309] selected driver: docker
	I1217 20:01:33.202127  641791 start.go:927] validating driver "docker" against <nil>
	I1217 20:01:33.202147  641791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:01:33.202718  641791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:01:33.265204  641791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 20:01:33.255595665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:01:33.265495  641791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:01:33.265886  641791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:01:33.268029  641791 out.go:179] * Using Docker driver with root privileges
	I1217 20:01:33.269314  641791 cni.go:84] Creating CNI manager for ""
	I1217 20:01:33.269379  641791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:33.269393  641791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:01:33.269488  641791 start.go:353] cluster config:
	{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:01:33.274207  641791 out.go:179] * Starting "embed-certs-147021" primary control-plane node in "embed-certs-147021" cluster
	I1217 20:01:33.275606  641791 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:01:33.276920  641791 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:01:33.278131  641791 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:01:33.278171  641791 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:01:33.278190  641791 cache.go:65] Caching tarball of preloaded images
	I1217 20:01:33.278215  641791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:01:33.278334  641791 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:01:33.278352  641791 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:01:33.278492  641791 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:01:33.278524  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json: {Name:mk0de7355995fcfbd87b6c9c1a955d58d25dce4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:33.300354  641791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:01:33.300385  641791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:01:33.300408  641791 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:01:33.300451  641791 start.go:360] acquireMachinesLock for embed-certs-147021: {Name:mkc6328ab9d874d1f1fffe133279d94e48b1c6e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:01:33.300584  641791 start.go:364] duration metric: took 106.56µs to acquireMachinesLock for "embed-certs-147021"
	I1217 20:01:33.300620  641791 start.go:93] Provisioning new machine with config: &{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:33.300736  641791 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Dec 17 20:01:23 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:23.48152379Z" level=info msg="Starting container: 897a598ccd124b2a362ce26c6cf066207e5ffc90c580031772c35e06ea4baccf" id=272d1523-853c-4025-9536-ff87867d9abb name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:23 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:23.484352575Z" level=info msg="Started container" PID=1921 containerID=897a598ccd124b2a362ce26c6cf066207e5ffc90c580031772c35e06ea4baccf description=kube-system/coredns-66bc5c9577-lv4jd/coredns id=272d1523-853c-4025-9536-ff87867d9abb name=/runtime.v1.RuntimeService/StartContainer sandboxID=031a1e2c3473c4b4cbe2aa96ea4b0f09086163a912942590088cb00eae68a858
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.020106308Z" level=info msg="Running pod sandbox: default/busybox/POD" id=5afc3455-f270-47c4-bb59-d0068b7fbe8e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.020183091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.025564853Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c3fd2a3b090f6b348350f08e3dda3e1a6ab96425d85506669d5420b950ce7e25 UID:3f23f224-9b23-48f4-a957-ebc839304940 NetNS:/var/run/netns/6c0732da-914f-4e72-ae7a-4727094dc9b6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000884658}] Aliases:map[]}"
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.025607635Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.03974063Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:c3fd2a3b090f6b348350f08e3dda3e1a6ab96425d85506669d5420b950ce7e25 UID:3f23f224-9b23-48f4-a957-ebc839304940 NetNS:/var/run/netns/6c0732da-914f-4e72-ae7a-4727094dc9b6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000884658}] Aliases:map[]}"
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.039913981Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.04092907Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.04215206Z" level=info msg="Ran pod sandbox c3fd2a3b090f6b348350f08e3dda3e1a6ab96425d85506669d5420b950ce7e25 with infra container: default/busybox/POD" id=5afc3455-f270-47c4-bb59-d0068b7fbe8e name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.043586469Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9975c167-899e-4cca-98e1-704afc4e63bb name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.043741236Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9975c167-899e-4cca-98e1-704afc4e63bb name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.043794911Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=9975c167-899e-4cca-98e1-704afc4e63bb name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.044564656Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fb4058d1-a90c-4c50-8c22-2e1e631df8bc name=/runtime.v1.ImageService/PullImage
	Dec 17 20:01:27 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:27.04976703Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.297768445Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=fb4058d1-a90c-4c50-8c22-2e1e631df8bc name=/runtime.v1.ImageService/PullImage
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.298480798Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19470aba-1cc3-4c0b-a75d-6bbca2ccf28b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.299848272Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=23a2af3d-e9d2-49db-b0f5-e20f81ff6dc6 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.303137787Z" level=info msg="Creating container: default/busybox/busybox" id=308bd8d1-9d78-49d7-bb83-cd88d24c469c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.303284974Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.306947963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.307607822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.340499574Z" level=info msg="Created container ed161e49baf549eb8762f5dedde797b9b8db4be7be5a56696005ae619702f5ac: default/busybox/busybox" id=308bd8d1-9d78-49d7-bb83-cd88d24c469c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.341127542Z" level=info msg="Starting container: ed161e49baf549eb8762f5dedde797b9b8db4be7be5a56696005ae619702f5ac" id=c2d46dd5-29e1-4940-8979-981e64396ee3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:28 default-k8s-diff-port-759234 crio[776]: time="2025-12-17T20:01:28.342986993Z" level=info msg="Started container" PID=1995 containerID=ed161e49baf549eb8762f5dedde797b9b8db4be7be5a56696005ae619702f5ac description=default/busybox/busybox id=c2d46dd5-29e1-4940-8979-981e64396ee3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c3fd2a3b090f6b348350f08e3dda3e1a6ab96425d85506669d5420b950ce7e25
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ed161e49baf54       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   c3fd2a3b090f6       busybox                                                default
	897a598ccd124       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   031a1e2c3473c       coredns-66bc5c9577-lv4jd                               kube-system
	49f3e1b5f6b25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   ad4e69aa978b0       storage-provisioner                                    kube-system
	1f010fc668721       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    23 seconds ago      Running             kindnet-cni               0                   39d2f79a2e79b       kindnet-dcwlb                                          kube-system
	bd1440375a249       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      25 seconds ago      Running             kube-proxy                0                   7331c9edf9bc1       kube-proxy-ztxcd                                       kube-system
	c0c2ba25f6933       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   19f8cad98748c       etcd-default-k8s-diff-port-759234                      kube-system
	5051ea3dcbb1d       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      34 seconds ago      Running             kube-scheduler            0                   4ebef653d55b5       kube-scheduler-default-k8s-diff-port-759234            kube-system
	3e6a2fc796c1e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      34 seconds ago      Running             kube-apiserver            0                   bc8b29ce1ca45       kube-apiserver-default-k8s-diff-port-759234            kube-system
	625dbd8713d8e       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      34 seconds ago      Running             kube-controller-manager   0                   28276498765de       kube-controller-manager-default-k8s-diff-port-759234   kube-system
	
	
	==> coredns [897a598ccd124b2a362ce26c6cf066207e5ffc90c580031772c35e06ea4baccf] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38455 - 54242 "HINFO IN 8820493428484686052.1793536270282562788. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015472007s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-759234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-759234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=default-k8s-diff-port-759234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-759234
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:01:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:01:23 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:01:23 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:01:23 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:01:23 +0000   Wed, 17 Dec 2025 20:01:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-759234
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                db8290dd-36ef-4726-9d3e-6ea726055ffb
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-lv4jd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-default-k8s-diff-port-759234                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-dcwlb                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-default-k8s-diff-port-759234             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-759234    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-ztxcd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-default-k8s-diff-port-759234             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node default-k8s-diff-port-759234 event: Registered Node default-k8s-diff-port-759234 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-759234 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [c0c2ba25f6933ad3574b6dd0c7024fc0cfab8ecc78066f8221887b0ac248daf3] <==
	{"level":"warn","ts":"2025-12-17T20:01:02.061733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.070350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.078001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.085795Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.092772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.100683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.108177Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.117099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.126308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.143174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.152592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54660","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.160042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.168294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.175457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.182517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.189395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.196415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.203514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.210845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.235924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.247883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.255021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:02.306545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:34.573676Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.196389ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:01:34.573773Z","caller":"traceutil/trace.go:172","msg":"trace[121499666] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:0; response_revision:438; }","duration":"147.3431ms","start":"2025-12-17T20:01:34.426416Z","end":"2025-12-17T20:01:34.573759Z","steps":["trace[121499666] 'range keys from in-memory index tree'  (duration: 147.129545ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:01:36 up  1:44,  0 user,  load average: 3.49, 3.25, 2.35
	Linux default-k8s-diff-port-759234 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1f010fc668721023b0c1e2523c04f759e8b06ad99f13c039e537031bb4dff2a8] <==
	I1217 20:01:12.616807       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:01:12.712764       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 20:01:12.712949       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:01:12.712986       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:01:12.713011       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:01:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:01:12.912864       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:01:12.912907       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:01:12.912919       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:01:12.913152       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:01:13.413032       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:01:13.413072       1 metrics.go:72] Registering metrics
	I1217 20:01:13.413176       1 controller.go:711] "Syncing nftables rules"
	I1217 20:01:22.913491       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:01:22.913543       1 main.go:301] handling current node
	I1217 20:01:32.913356       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:01:32.913393       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3e6a2fc796c1e35eca29a151ef15854ff6279a283645ec00e6e6af74279ff0c6] <==
	I1217 20:01:02.808464       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:01:02.808493       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 20:01:02.812743       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:02.813165       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 20:01:02.817438       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:02.817710       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:01:03.005054       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:01:03.699976       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 20:01:03.703744       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 20:01:03.703763       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:01:04.156136       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:01:04.191928       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:01:04.304495       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 20:01:04.310601       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1217 20:01:04.311732       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:01:04.316319       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:01:04.735289       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:01:05.471343       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:01:05.481502       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 20:01:05.488973       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 20:01:10.437386       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1217 20:01:10.590253       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:10.594051       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:10.787630       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1217 20:01:33.810212       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:46524: use of closed network connection
	
	
	==> kube-controller-manager [625dbd8713d8ee52a0e975608577f81b470d4717a0a347d811ea25af386112dd] <==
	I1217 20:01:09.733649       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 20:01:09.733815       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 20:01:09.733929       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 20:01:09.733913       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-759234"
	I1217 20:01:09.733998       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1217 20:01:09.734637       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:01:09.734653       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 20:01:09.734706       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:01:09.734899       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:01:09.734936       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:01:09.735114       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1217 20:01:09.735236       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1217 20:01:09.735872       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:01:09.735907       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1217 20:01:09.736009       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:01:09.736123       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 20:01:09.738943       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:01:09.738995       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:01:09.739048       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:01:09.739055       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:01:09.739063       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:01:09.740215       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:01:09.745457       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-759234" podCIDRs=["10.244.0.0/24"]
	I1217 20:01:09.752561       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:01:24.736237       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bd1440375a2496d226b9499cdd237ec6a5452fa58958e2dd851a80749d345275] <==
	I1217 20:01:10.879073       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:01:10.967723       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:01:11.067910       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:01:11.067975       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 20:01:11.068093       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:01:11.106914       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:01:11.107058       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:01:11.119164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:01:11.120010       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:01:11.120043       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:01:11.122761       1 config.go:200] "Starting service config controller"
	I1217 20:01:11.122781       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:01:11.122810       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:01:11.122815       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:01:11.123059       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:01:11.123135       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:01:11.133303       1 config.go:309] "Starting node config controller"
	I1217 20:01:11.134340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:01:11.134359       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:01:11.223555       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:01:11.223690       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 20:01:11.223709       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5051ea3dcbb1d673a825950fa8a7061323400b740c55f9eeafd1d3bbb1df07cd] <==
	E1217 20:01:02.766928       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:01:02.767580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 20:01:02.767715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 20:01:02.767758       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:01:02.767751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:01:02.767849       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:01:02.767864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:01:02.767884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:01:02.767932       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 20:01:02.767979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:01:02.767981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:01:02.768002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 20:01:02.768043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:01:02.768051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:01:02.768124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1217 20:01:02.768145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:01:02.768159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:01:03.664408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:01:03.672011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:01:03.675203       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:01:03.731173       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1217 20:01:03.826194       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:01:03.853182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:01:04.136980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1217 20:01:06.365407       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:01:06 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:06.389582    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-759234" podStartSLOduration=1.389558015 podStartE2EDuration="1.389558015s" podCreationTimestamp="2025-12-17 20:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:06.378702366 +0000 UTC m=+1.145481516" watchObservedRunningTime="2025-12-17 20:01:06.389558015 +0000 UTC m=+1.156337160"
	Dec 17 20:01:06 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:06.399096    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-759234" podStartSLOduration=1.399057273 podStartE2EDuration="1.399057273s" podCreationTimestamp="2025-12-17 20:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:06.389730826 +0000 UTC m=+1.156509972" watchObservedRunningTime="2025-12-17 20:01:06.399057273 +0000 UTC m=+1.165836417"
	Dec 17 20:01:06 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:06.399287    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-759234" podStartSLOduration=1.399273787 podStartE2EDuration="1.399273787s" podCreationTimestamp="2025-12-17 20:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:06.399154559 +0000 UTC m=+1.165933705" watchObservedRunningTime="2025-12-17 20:01:06.399273787 +0000 UTC m=+1.166052935"
	Dec 17 20:01:06 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:06.408799    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-759234" podStartSLOduration=1.408779953 podStartE2EDuration="1.408779953s" podCreationTimestamp="2025-12-17 20:01:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:06.408751603 +0000 UTC m=+1.175530748" watchObservedRunningTime="2025-12-17 20:01:06.408779953 +0000 UTC m=+1.175559099"
	Dec 17 20:01:09 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:09.769224    1333 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 20:01:09 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:09.769960    1333 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545473    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a079c536-3edc-4c6e-b4a0-2cd7c0aa432f-kube-proxy\") pod \"kube-proxy-ztxcd\" (UID: \"a079c536-3edc-4c6e-b4a0-2cd7c0aa432f\") " pod="kube-system/kube-proxy-ztxcd"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545517    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a079c536-3edc-4c6e-b4a0-2cd7c0aa432f-lib-modules\") pod \"kube-proxy-ztxcd\" (UID: \"a079c536-3edc-4c6e-b4a0-2cd7c0aa432f\") " pod="kube-system/kube-proxy-ztxcd"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545584    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/36f8201b-9363-43f5-8a85-e9291ee817a3-xtables-lock\") pod \"kindnet-dcwlb\" (UID: \"36f8201b-9363-43f5-8a85-e9291ee817a3\") " pod="kube-system/kindnet-dcwlb"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545667    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a079c536-3edc-4c6e-b4a0-2cd7c0aa432f-xtables-lock\") pod \"kube-proxy-ztxcd\" (UID: \"a079c536-3edc-4c6e-b4a0-2cd7c0aa432f\") " pod="kube-system/kube-proxy-ztxcd"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545709    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxcb\" (UniqueName: \"kubernetes.io/projected/a079c536-3edc-4c6e-b4a0-2cd7c0aa432f-kube-api-access-zvxcb\") pod \"kube-proxy-ztxcd\" (UID: \"a079c536-3edc-4c6e-b4a0-2cd7c0aa432f\") " pod="kube-system/kube-proxy-ztxcd"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545726    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/36f8201b-9363-43f5-8a85-e9291ee817a3-lib-modules\") pod \"kindnet-dcwlb\" (UID: \"36f8201b-9363-43f5-8a85-e9291ee817a3\") " pod="kube-system/kindnet-dcwlb"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545741    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/36f8201b-9363-43f5-8a85-e9291ee817a3-cni-cfg\") pod \"kindnet-dcwlb\" (UID: \"36f8201b-9363-43f5-8a85-e9291ee817a3\") " pod="kube-system/kindnet-dcwlb"
	Dec 17 20:01:10 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:10.545758    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xd6s\" (UniqueName: \"kubernetes.io/projected/36f8201b-9363-43f5-8a85-e9291ee817a3-kube-api-access-6xd6s\") pod \"kindnet-dcwlb\" (UID: \"36f8201b-9363-43f5-8a85-e9291ee817a3\") " pod="kube-system/kindnet-dcwlb"
	Dec 17 20:01:11 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:11.384890    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ztxcd" podStartSLOduration=1.384865527 podStartE2EDuration="1.384865527s" podCreationTimestamp="2025-12-17 20:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:11.384010888 +0000 UTC m=+6.150790034" watchObservedRunningTime="2025-12-17 20:01:11.384865527 +0000 UTC m=+6.151644674"
	Dec 17 20:01:13 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:13.381821    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dcwlb" podStartSLOduration=1.7482675319999998 podStartE2EDuration="3.381796958s" podCreationTimestamp="2025-12-17 20:01:10 +0000 UTC" firstStartedPulling="2025-12-17 20:01:10.776235247 +0000 UTC m=+5.543014385" lastFinishedPulling="2025-12-17 20:01:12.409764679 +0000 UTC m=+7.176543811" observedRunningTime="2025-12-17 20:01:13.381639897 +0000 UTC m=+8.148419043" watchObservedRunningTime="2025-12-17 20:01:13.381796958 +0000 UTC m=+8.148576104"
	Dec 17 20:01:23 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:23.065334    1333 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 20:01:23 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:23.144904    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z7hm\" (UniqueName: \"kubernetes.io/projected/a17149a4-0ee9-41fb-96d8-42931da4569f-kube-api-access-2z7hm\") pod \"coredns-66bc5c9577-lv4jd\" (UID: \"a17149a4-0ee9-41fb-96d8-42931da4569f\") " pod="kube-system/coredns-66bc5c9577-lv4jd"
	Dec 17 20:01:23 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:23.144970    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ztcl\" (UniqueName: \"kubernetes.io/projected/885e7cc2-77a0-4ba5-be19-0f37c71945f8-kube-api-access-2ztcl\") pod \"storage-provisioner\" (UID: \"885e7cc2-77a0-4ba5-be19-0f37c71945f8\") " pod="kube-system/storage-provisioner"
	Dec 17 20:01:23 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:23.144999    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a17149a4-0ee9-41fb-96d8-42931da4569f-config-volume\") pod \"coredns-66bc5c9577-lv4jd\" (UID: \"a17149a4-0ee9-41fb-96d8-42931da4569f\") " pod="kube-system/coredns-66bc5c9577-lv4jd"
	Dec 17 20:01:23 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:23.145063    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/885e7cc2-77a0-4ba5-be19-0f37c71945f8-tmp\") pod \"storage-provisioner\" (UID: \"885e7cc2-77a0-4ba5-be19-0f37c71945f8\") " pod="kube-system/storage-provisioner"
	Dec 17 20:01:24 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:24.411742    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lv4jd" podStartSLOduration=14.411716721 podStartE2EDuration="14.411716721s" podCreationTimestamp="2025-12-17 20:01:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:24.411546406 +0000 UTC m=+19.178325571" watchObservedRunningTime="2025-12-17 20:01:24.411716721 +0000 UTC m=+19.178495867"
	Dec 17 20:01:24 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:24.422674    1333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.42264649 podStartE2EDuration="13.42264649s" podCreationTimestamp="2025-12-17 20:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:24.422343415 +0000 UTC m=+19.189122561" watchObservedRunningTime="2025-12-17 20:01:24.42264649 +0000 UTC m=+19.189425637"
	Dec 17 20:01:26 default-k8s-diff-port-759234 kubelet[1333]: I1217 20:01:26.768320    1333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89b7\" (UniqueName: \"kubernetes.io/projected/3f23f224-9b23-48f4-a957-ebc839304940-kube-api-access-r89b7\") pod \"busybox\" (UID: \"3f23f224-9b23-48f4-a957-ebc839304940\") " pod="default/busybox"
	Dec 17 20:01:33 default-k8s-diff-port-759234 kubelet[1333]: E1217 20:01:33.810234    1333 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:60594->127.0.0.1:33239: write tcp 127.0.0.1:60594->127.0.0.1:33239: write: broken pipe
	
	
	==> storage-provisioner [49f3e1b5f6b257998d3884926bf9477591cce14c2ae20998cf4a4485d0ed73f2] <==
	I1217 20:01:23.491293       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:01:23.499901       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:01:23.500069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:01:23.502251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:23.508873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:01:23.509101       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:01:23.509302       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef3dee07-d1ce-418e-a6ba-4a2d4546a253", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-759234_9d091005-cd72-41ea-9828-370db5f42fdb became leader
	I1217 20:01:23.509302       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759234_9d091005-cd72-41ea-9828-370db5f42fdb!
	W1217 20:01:23.512202       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:23.516536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:01:23.610296       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759234_9d091005-cd72-41ea-9828-370db5f42fdb!
	W1217 20:01:25.520113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:25.527781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:27.532141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:27.539970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:29.543795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:29.550399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:31.553838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:31.560019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:33.582001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:33.653521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:35.657909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:01:35.667240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (3.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (327.983292ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:01:55Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-420762
helpers_test.go:244: (dbg) docker inspect newest-cni-420762:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882",
	        "Created": "2025-12-17T20:01:35.486713573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 642653,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:01:35.544967151Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/hostname",
	        "HostsPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/hosts",
	        "LogPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882-json.log",
	        "Name": "/newest-cni-420762",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-420762:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-420762",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882",
	                "LowerDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-420762",
	                "Source": "/var/lib/docker/volumes/newest-cni-420762/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-420762",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-420762",
	                "name.minikube.sigs.k8s.io": "newest-cni-420762",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "55bae9462291927c18f7483f71230b296c61b73bf0a82684bf6efe4ca34845cd",
	            "SandboxKey": "/var/run/docker/netns/55bae9462291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-420762": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c599555d4217815d05b632e5621ed20805e2fb5e529f70229a8fb07f9886d72c",
	                    "EndpointID": "5565528c7a2f1d44d3d2a115fa52c6e1939897b194d80873e2f69e162ddc36dd",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1a:d7:35:48:d7:fe",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-420762",
	                        "f638a198c1fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-420762 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-420762 logs -n 25: (1.201897107s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 19:59 UTC │ 17 Dec 25 19:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-832842 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p no-preload-832842 --alsologtostderr -v=3                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-894575 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │                     │
	│ stop    │ -p old-k8s-version-894575 --alsologtostderr -v=3                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ addons  │ enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                          │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                          │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                         │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759234 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:01:33
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:01:33.091895  641791 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:01:33.092050  641791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:33.092062  641791 out.go:374] Setting ErrFile to fd 2...
	I1217 20:01:33.092069  641791 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:01:33.092310  641791 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:01:33.092850  641791 out.go:368] Setting JSON to false
	I1217 20:01:33.094021  641791 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6244,"bootTime":1765995449,"procs":286,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:01:33.094099  641791 start.go:143] virtualization: kvm guest
	I1217 20:01:33.096247  641791 out.go:179] * [embed-certs-147021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:01:33.097457  641791 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:01:33.097511  641791 notify.go:221] Checking for updates...
	I1217 20:01:33.100429  641791 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:01:33.102070  641791 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:33.103213  641791 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:01:33.104315  641791 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:01:33.105413  641791 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:01:33.107203  641791 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:33.107374  641791 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:33.107519  641791 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:33.107648  641791 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:01:33.133692  641791 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:01:33.133791  641791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:01:33.197834  641791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 20:01:33.187070252 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:01:33.198041  641791 docker.go:319] overlay module found
	I1217 20:01:33.200929  641791 out.go:179] * Using the docker driver based on user configuration
	I1217 20:01:33.202110  641791 start.go:309] selected driver: docker
	I1217 20:01:33.202127  641791 start.go:927] validating driver "docker" against <nil>
	I1217 20:01:33.202147  641791 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:01:33.202718  641791 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:01:33.265204  641791 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:false NGoroutines:81 SystemTime:2025-12-17 20:01:33.255595665 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:01:33.265495  641791 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:01:33.265886  641791 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:01:33.268029  641791 out.go:179] * Using Docker driver with root privileges
	I1217 20:01:33.269314  641791 cni.go:84] Creating CNI manager for ""
	I1217 20:01:33.269379  641791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:33.269393  641791 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1217 20:01:33.269488  641791 start.go:353] cluster config:
	{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:01:33.274207  641791 out.go:179] * Starting "embed-certs-147021" primary control-plane node in "embed-certs-147021" cluster
	I1217 20:01:33.275606  641791 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:01:33.276920  641791 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:01:33.278131  641791 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:01:33.278171  641791 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:01:33.278190  641791 cache.go:65] Caching tarball of preloaded images
	I1217 20:01:33.278215  641791 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:01:33.278334  641791 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:01:33.278352  641791 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:01:33.278492  641791 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:01:33.278524  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json: {Name:mk0de7355995fcfbd87b6c9c1a955d58d25dce4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:33.300354  641791 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:01:33.300385  641791 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:01:33.300408  641791 cache.go:243] Successfully downloaded all kic artifacts
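
The two cache checks above reduce to simple existence tests: is the preloaded tarball already on disk, and is the kicbase image already present in the local docker daemon. A minimal sketch of the same checks, with hypothetical paths and image references standing in for the real ones (not minikube's actual code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// hasPreload reports whether the preloaded image tarball is already cached on disk,
// so the download step can be skipped.
func hasPreload(path string) bool {
	_, err := os.Stat(path)
	return err == nil
}

// imageInDaemon reports whether an image reference is already present in the local
// docker daemon; "docker image inspect" exits non-zero when it is not.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	// Hypothetical values, for illustration only.
	tarball := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4")
	baseImage := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48"

	fmt.Println("preload cached:   ", hasPreload(tarball))
	fmt.Println("base image cached:", imageInDaemon(baseImage))
}
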
	I1217 20:01:33.300451  641791 start.go:360] acquireMachinesLock for embed-certs-147021: {Name:mkc6328ab9d874d1f1fffe133279d94e48b1c6e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:01:33.300584  641791 start.go:364] duration metric: took 106.56µs to acquireMachinesLock for "embed-certs-147021"
	I1217 20:01:33.300620  641791 start.go:93] Provisioning new machine with config: &{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:33.300736  641791 start.go:125] createHost starting for "" (driver="docker")
	I1217 20:01:30.882984  640931 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:01:30.883253  640931 start.go:159] libmachine.API.Create for "newest-cni-420762" (driver="docker")
	I1217 20:01:30.883287  640931 client.go:173] LocalClient.Create starting
	I1217 20:01:30.883351  640931 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:01:30.883397  640931 main.go:143] libmachine: Decoding PEM data...
	I1217 20:01:30.883428  640931 main.go:143] libmachine: Parsing certificate...
	I1217 20:01:30.883500  640931 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:01:30.883523  640931 main.go:143] libmachine: Decoding PEM data...
	I1217 20:01:30.883535  640931 main.go:143] libmachine: Parsing certificate...
	I1217 20:01:30.883896  640931 cli_runner.go:164] Run: docker network inspect newest-cni-420762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:01:30.902825  640931 cli_runner.go:211] docker network inspect newest-cni-420762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:01:30.902912  640931 network_create.go:284] running [docker network inspect newest-cni-420762] to gather additional debugging logs...
	I1217 20:01:30.902933  640931 cli_runner.go:164] Run: docker network inspect newest-cni-420762
	W1217 20:01:30.919819  640931 cli_runner.go:211] docker network inspect newest-cni-420762 returned with exit code 1
	I1217 20:01:30.919902  640931 network_create.go:287] error running [docker network inspect newest-cni-420762]: docker network inspect newest-cni-420762: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-420762 not found
	I1217 20:01:30.919929  640931 network_create.go:289] output of [docker network inspect newest-cni-420762]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-420762 not found
	
	** /stderr **
	I1217 20:01:30.920061  640931 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:01:30.938710  640931 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:01:30.939560  640931 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:01:30.940030  640931 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:01:30.940717  640931 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 20:01:30.941573  640931 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-f0ce1019d985 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:26:5a:f7:51:9a:55} reservation:<nil>}
	I1217 20:01:30.942387  640931 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-034e5df717c0 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:ba:b3:70:1b:24:dd} reservation:<nil>}
	I1217 20:01:30.943207  640931 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f7f690}
	I1217 20:01:30.943234  640931 network_create.go:124] attempt to create docker network newest-cni-420762 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1217 20:01:30.943300  640931 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-420762 newest-cni-420762
	I1217 20:01:30.992609  640931 network_create.go:108] docker network newest-cni-420762 192.168.103.0/24 created
	I1217 20:01:30.992645  640931 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-420762" container
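
The six "skipping subnet ... that is taken" lines and the "using free private subnet 192.168.103.0/24" line show the selection rule at work: candidate private /24 subnets are walked from 192.168.49.0 upward in steps of 9 in the third octet, any subnet already owned by an existing bridge is skipped, and the first free one is used, with .1 as the gateway and .2 as the node's static IP. A small sketch of that rule, assuming the taken set is already known rather than gathered from docker network inspect:

package main

import "fmt"

// pickSubnet returns the first candidate /24 that is not already taken by an
// existing bridge, walking the third octet in steps of 9 as the lines above do,
// along with the gateway (.1) and the first node address (.2).
func pickSubnet(taken map[string]bool) (subnet, gateway, nodeIP string, ok bool) {
	for third := 49; third <= 247; third += 9 {
		s := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[s] {
			continue
		}
		return s, fmt.Sprintf("192.168.%d.1", third), fmt.Sprintf("192.168.%d.2", third), true
	}
	return "", "", "", false
}

func main() {
	// The subnets reported as taken in the lines above.
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true, "192.168.94.0/24": true,
	}
	fmt.Println(pickSubnet(taken)) // 192.168.103.0/24 192.168.103.1 192.168.103.2 true
}

With the taken set shown in the log, the first free candidate is 192.168.103.0/24, which matches the subnet actually chosen above.
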
	I1217 20:01:30.992715  640931 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:01:31.011871  640931 cli_runner.go:164] Run: docker volume create newest-cni-420762 --label name.minikube.sigs.k8s.io=newest-cni-420762 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:01:31.030552  640931 oci.go:103] Successfully created a docker volume newest-cni-420762
	I1217 20:01:31.030642  640931 cli_runner.go:164] Run: docker run --rm --name newest-cni-420762-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-420762 --entrypoint /usr/bin/test -v newest-cni-420762:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:01:31.455024  640931 oci.go:107] Successfully prepared a docker volume newest-cni-420762
	I1217 20:01:31.455102  640931 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:01:31.455116  640931 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:01:31.455191  640931 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-420762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:01:35.390226  640931 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-420762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.934902382s)
	I1217 20:01:35.390272  640931 kic.go:203] duration metric: took 3.935152489s to extract preloaded images to volume ...
	W1217 20:01:35.390424  640931 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:01:35.390463  640931 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:01:35.390544  640931 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:01:35.467125  640931 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-420762 --name newest-cni-420762 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-420762 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-420762 --network newest-cni-420762 --ip 192.168.103.2 --volume newest-cni-420762:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
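
The --publish=127.0.0.1::8443, ::22, ::2376, ::5000 and ::32443 flags above leave the host side of each mapping unspecified, so docker assigns a free port; later lines resolve it with a docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" template before opening an SSH client. A small sketch of the same lookup, assuming only the container name from the line above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort resolves the dynamically assigned host port for a container port,
// using the same inspect template that appears later in the log.
func hostPort(container, containerPort string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("newest-cni-420762", "22") // container created above; 22 is its SSH port
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh is reachable on 127.0.0.1:" + port)
}
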
	I1217 20:01:32.292271  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:32.292304  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:32.324646  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:32.324684  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:32.356197  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:32.356239  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:32.418245  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:32.418282  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:32.456088  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:32.456134  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:35.060329  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:35.060802  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:35.060871  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:35.060934  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:35.089657  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:35.089681  596882 cri.go:89] found id: ""
	I1217 20:01:35.089691  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:35.089756  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:35.094059  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:35.094152  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:35.121406  596882 cri.go:89] found id: ""
	I1217 20:01:35.121438  596882 logs.go:282] 0 containers: []
	W1217 20:01:35.121451  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:35.121460  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:35.121522  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:35.149656  596882 cri.go:89] found id: ""
	I1217 20:01:35.149687  596882 logs.go:282] 0 containers: []
	W1217 20:01:35.149702  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:35.149710  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:35.149774  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:35.179045  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:35.179067  596882 cri.go:89] found id: ""
	I1217 20:01:35.179089  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:35.179151  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:35.183331  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:35.183409  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:35.211822  596882 cri.go:89] found id: ""
	I1217 20:01:35.211853  596882 logs.go:282] 0 containers: []
	W1217 20:01:35.211866  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:35.211873  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:35.211945  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:35.240236  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:35.240262  596882 cri.go:89] found id: ""
	I1217 20:01:35.240271  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:35.240324  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:35.244440  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:35.244510  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:35.272165  596882 cri.go:89] found id: ""
	I1217 20:01:35.272197  596882 logs.go:282] 0 containers: []
	W1217 20:01:35.272210  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:35.272218  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:35.272283  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:35.300126  596882 cri.go:89] found id: ""
	I1217 20:01:35.300151  596882 logs.go:282] 0 containers: []
	W1217 20:01:35.300159  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:35.300170  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:35.300185  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:35.356442  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:35.356478  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:35.407674  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:35.407714  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:35.531652  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:35.531701  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:35.553150  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:35.553187  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:35.629502  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:35.629530  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:35.629550  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:35.685118  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:35.685162  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:35.721809  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:35.721839  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
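
The log-gathering pass above repeats one pattern per component: list matching container IDs with crictl ps -a --quiet --name=<component>, warn when none are found, and otherwise tail each container with crictl logs --tail 400 <id> (journalctl is used for kubelet and CRI-O, and kubectl describe nodes fails while the apiserver refuses connections). A minimal local sketch of that loop, run directly instead of through minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists CRI container IDs whose name matches the component,
// mirroring the "crictl ps -a --quiet --name=..." calls in the log above.
func containerIDs(component string) []string {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil
	}
	return strings.Fields(string(out))
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, c := range components {
		ids := containerIDs(c)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
			continue
		}
		for _, id := range ids {
			// Tail the last 400 lines of each matching container, as above.
			logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", c, id, logs)
		}
	}
}
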
	I1217 20:01:33.303597  641791 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:01:33.303833  641791 start.go:159] libmachine.API.Create for "embed-certs-147021" (driver="docker")
	I1217 20:01:33.303864  641791 client.go:173] LocalClient.Create starting
	I1217 20:01:33.303948  641791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:01:33.303985  641791 main.go:143] libmachine: Decoding PEM data...
	I1217 20:01:33.304004  641791 main.go:143] libmachine: Parsing certificate...
	I1217 20:01:33.304054  641791 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:01:33.304094  641791 main.go:143] libmachine: Decoding PEM data...
	I1217 20:01:33.304109  641791 main.go:143] libmachine: Parsing certificate...
	I1217 20:01:33.304461  641791 cli_runner.go:164] Run: docker network inspect embed-certs-147021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:01:33.322331  641791 cli_runner.go:211] docker network inspect embed-certs-147021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:01:33.322495  641791 network_create.go:284] running [docker network inspect embed-certs-147021] to gather additional debugging logs...
	I1217 20:01:33.322520  641791 cli_runner.go:164] Run: docker network inspect embed-certs-147021
	W1217 20:01:33.343862  641791 cli_runner.go:211] docker network inspect embed-certs-147021 returned with exit code 1
	I1217 20:01:33.343905  641791 network_create.go:287] error running [docker network inspect embed-certs-147021]: docker network inspect embed-certs-147021: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-147021 not found
	I1217 20:01:33.343916  641791 network_create.go:289] output of [docker network inspect embed-certs-147021]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-147021 not found
	
	** /stderr **
	I1217 20:01:33.344032  641791 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:01:33.365328  641791 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:01:33.366374  641791 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:01:33.367026  641791 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:01:33.367708  641791 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-4c731e2a052d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4e:e6:a7:52:2c:69} reservation:<nil>}
	I1217 20:01:33.368817  641791 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001dcfbf0}
	I1217 20:01:33.368850  641791 network_create.go:124] attempt to create docker network embed-certs-147021 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 20:01:33.368903  641791 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-147021 embed-certs-147021
	I1217 20:01:33.419323  641791 network_create.go:108] docker network embed-certs-147021 192.168.85.0/24 created
	I1217 20:01:33.419365  641791 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-147021" container
	I1217 20:01:33.419509  641791 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:01:33.440505  641791 cli_runner.go:164] Run: docker volume create embed-certs-147021 --label name.minikube.sigs.k8s.io=embed-certs-147021 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:01:33.461677  641791 oci.go:103] Successfully created a docker volume embed-certs-147021
	I1217 20:01:33.461789  641791 cli_runner.go:164] Run: docker run --rm --name embed-certs-147021-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-147021 --entrypoint /usr/bin/test -v embed-certs-147021:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:01:35.871968  641791 cli_runner.go:217] Completed: docker run --rm --name embed-certs-147021-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-147021 --entrypoint /usr/bin/test -v embed-certs-147021:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib: (2.410132025s)
	I1217 20:01:35.872017  641791 oci.go:107] Successfully prepared a docker volume embed-certs-147021
	I1217 20:01:35.872098  641791 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:01:35.872116  641791 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:01:35.872207  641791 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-147021:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:01:35.813311  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Running}}
	I1217 20:01:35.837635  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:01:35.860576  640931 cli_runner.go:164] Run: docker exec newest-cni-420762 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:01:35.915933  640931 oci.go:144] the created container "newest-cni-420762" has a running status.
	I1217 20:01:35.915966  640931 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa...
	I1217 20:01:36.121206  640931 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:01:36.159039  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:01:36.188237  640931 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:01:36.188266  640931 kic_runner.go:114] Args: [docker exec --privileged newest-cni-420762 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:01:36.263856  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:01:36.282956  640931 machine.go:94] provisionDockerMachine start ...
	I1217 20:01:36.283164  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:36.302311  640931 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:36.336127  640931 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1217 20:01:36.336158  640931 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:01:36.500207  640931 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-420762
	
	I1217 20:01:36.500240  640931 ubuntu.go:182] provisioning hostname "newest-cni-420762"
	I1217 20:01:36.500328  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:36.525716  640931 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:36.525999  640931 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1217 20:01:36.526021  640931 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-420762 && echo "newest-cni-420762" | sudo tee /etc/hostname
	I1217 20:01:36.698976  640931 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-420762
	
	I1217 20:01:36.699052  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:36.725326  640931 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:36.727504  640931 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1217 20:01:36.727545  640931 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-420762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-420762/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-420762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:01:36.883990  640931 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:01:36.884027  640931 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:01:36.884098  640931 ubuntu.go:190] setting up certificates
	I1217 20:01:36.884112  640931 provision.go:84] configureAuth start
	I1217 20:01:36.884190  640931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-420762
	I1217 20:01:36.905814  640931 provision.go:143] copyHostCerts
	I1217 20:01:36.905873  640931 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:01:36.905883  640931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:01:36.905945  640931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:01:36.906089  640931 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:01:36.906105  640931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:01:36.906153  640931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:01:36.906271  640931 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:01:36.906284  640931 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:01:36.906324  640931 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:01:36.906412  640931 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.newest-cni-420762 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-420762]
	I1217 20:01:37.058135  640931 provision.go:177] copyRemoteCerts
	I1217 20:01:37.058216  640931 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:01:37.058266  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:37.079932  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:37.187927  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:01:37.214221  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:01:37.234108  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:01:37.253828  640931 provision.go:87] duration metric: took 369.700195ms to configureAuth
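
configureAuth above generates a server certificate whose SAN list covers 127.0.0.1, the node IP, localhost, minikube and the machine name, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the node. A self-contained sketch of producing a certificate with that kind of SAN list (self-signed here for brevity; the real flow signs it with the CA key read earlier):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-420762"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		// SAN entries matching the san=[...] list in the log.
		DNSNames:    []string{"localhost", "minikube", "newest-cni-420762"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
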
	I1217 20:01:37.253862  640931 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:01:37.254135  640931 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:37.254279  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:37.274238  640931 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:37.274481  640931 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33458 <nil> <nil>}
	I1217 20:01:37.274498  640931 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:01:37.598195  640931 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:01:37.598228  640931 machine.go:97] duration metric: took 1.315249121s to provisionDockerMachine
	I1217 20:01:37.598243  640931 client.go:176] duration metric: took 6.714945396s to LocalClient.Create
	I1217 20:01:37.598273  640931 start.go:167] duration metric: took 6.715027088s to libmachine.API.Create "newest-cni-420762"
	I1217 20:01:37.598282  640931 start.go:293] postStartSetup for "newest-cni-420762" (driver="docker")
	I1217 20:01:37.598300  640931 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:01:37.598442  640931 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:01:37.598496  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:37.618834  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:37.724592  640931 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:01:37.728796  640931 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:01:37.728827  640931 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:01:37.728838  640931 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:01:37.728903  640931 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:01:37.728988  640931 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:01:37.729106  640931 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:01:37.737833  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:01:37.760311  640931 start.go:296] duration metric: took 162.010042ms for postStartSetup
	I1217 20:01:37.760761  640931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-420762
	I1217 20:01:37.779798  640931 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/config.json ...
	I1217 20:01:37.780207  640931 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:01:37.780271  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:37.799586  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:37.899486  640931 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:01:37.904223  640931 start.go:128] duration metric: took 7.023187501s to createHost
	I1217 20:01:37.904253  640931 start.go:83] releasing machines lock for "newest-cni-420762", held for 7.023368403s
	I1217 20:01:37.904326  640931 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-420762
	I1217 20:01:37.924196  640931 ssh_runner.go:195] Run: cat /version.json
	I1217 20:01:37.924245  640931 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:01:37.924279  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:37.924332  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:37.943827  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:37.944022  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:38.096489  640931 ssh_runner.go:195] Run: systemctl --version
	I1217 20:01:38.103663  640931 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:01:38.143918  640931 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:01:38.149014  640931 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:01:38.149128  640931 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:01:38.179639  640931 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:01:38.179668  640931 start.go:496] detecting cgroup driver to use...
	I1217 20:01:38.179707  640931 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:01:38.179760  640931 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:01:38.197987  640931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:01:38.214506  640931 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:01:38.214580  640931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:01:38.233693  640931 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:01:38.264945  640931 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:01:38.364165  640931 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:01:38.480293  640931 docker.go:234] disabling docker service ...
	I1217 20:01:38.480365  640931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:01:38.503760  640931 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:01:38.519994  640931 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:01:38.668047  640931 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:01:38.766644  640931 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:01:38.780269  640931 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:01:38.795680  640931 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:01:38.795731  640931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:38.885193  640931 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:01:38.885271  640931 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:38.997851  640931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:39.126110  640931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:39.244735  640931 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:01:39.253746  640931 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:39.362962  640931 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:39.489234  640931 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
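
The sed edits above pin the pause image, force the systemd cgroup manager, scope conmon to the pod cgroup, and open unprivileged low ports via default_sysctls, all as in-place rewrites of /etc/crio/crio.conf.d/02-crio.conf. A rough Go sketch of the same idempotent "rewrite the key = value line" pattern (only the simple substitutions; the append-if-missing handling for default_sysctls is left out):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setKey rewrites any existing "key = ..." line in a CRI-O drop-in to the desired
// value, the same idempotent, sed-style substitution the commands above perform.
func setKey(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return os.WriteFile(path, re.ReplaceAll(data, []byte(key+" = "+value)), 0o644)
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path from the log above
	for key, value := range map[string]string{
		"pause_image":    `"registry.k8s.io/pause:3.10.1"`,
		"cgroup_manager": `"systemd"`,
	} {
		if err := setKey(conf, key, value); err != nil {
			fmt.Println("edit failed:", err)
		}
	}
}
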
	I1217 20:01:39.619366  640931 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:01:39.627511  640931 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:01:39.635961  640931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:39.719984  640931 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:01:40.138808  640931 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:01:40.138901  640931 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:01:40.144867  640931 start.go:564] Will wait 60s for crictl version
	I1217 20:01:40.144931  640931 ssh_runner.go:195] Run: which crictl
	I1217 20:01:40.150065  640931 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:01:40.178978  640931 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:01:40.179071  640931 ssh_runner.go:195] Run: crio --version
	I1217 20:01:40.218887  640931 ssh_runner.go:195] Run: crio --version
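
After the restart, startup waits up to 60s for /var/run/crio/crio.sock to appear and then up to 60s more for crictl version to answer. A minimal sketch of that poll-until-ready step, using the socket path and timeout from the lines above:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a unix socket path until it exists or the deadline passes,
// the same "Will wait 60s for socket path" step the lines above describe.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("crio socket is ready")
}
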
	I1217 20:01:40.261980  640931 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:01:40.263223  640931 cli_runner.go:164] Run: docker network inspect newest-cni-420762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:01:40.283282  640931 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 20:01:40.288119  640931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:01:40.302311  640931 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 20:01:40.303761  640931 kubeadm.go:884] updating cluster {Name:newest-cni-420762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:01:40.303901  640931 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:01:40.303958  640931 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:01:40.338815  640931 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:01:40.338838  640931 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:01:40.338886  640931 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:01:40.365019  640931 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:01:40.365044  640931 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:01:40.365052  640931 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 20:01:40.365173  640931 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-420762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:01:40.365253  640931 ssh_runner.go:195] Run: crio config
	I1217 20:01:40.418191  640931 cni.go:84] Creating CNI manager for ""
	I1217 20:01:40.418220  640931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:40.418256  640931 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 20:01:40.418294  640931 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-420762 NodeName:newest-cni-420762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:01:40.418488  640931 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-420762"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
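The generated kubeadm config above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), later written to /var/tmp/minikube/kubeadm.yaml.new and copied into place as kubeadm.yaml. A minimal sketch of reading such a multi-document file, assuming gopkg.in/yaml.v3 (an assumption for illustration; not necessarily the library minikube itself uses):

// read_kubeadm_yaml.go: list the apiVersion/kind of each document in the config.
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Expect InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
		fmt.Printf("%v/%v\n", doc["apiVersion"], doc["kind"])
	}
}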
	
	I1217 20:01:40.418564  640931 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:01:40.427114  640931 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:01:40.427199  640931 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:01:40.436786  640931 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 20:01:40.451432  640931 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:01:40.469613  640931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I1217 20:01:40.484428  640931 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:01:40.488657  640931 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:01:40.499963  640931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:40.591450  640931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:40.608018  640931 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762 for IP: 192.168.103.2
	I1217 20:01:40.608043  640931 certs.go:195] generating shared ca certs ...
	I1217 20:01:40.608068  640931 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.608274  640931 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:01:40.608335  640931 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:01:40.608352  640931 certs.go:257] generating profile certs ...
	I1217 20:01:40.608426  640931 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.key
	I1217 20:01:40.608458  640931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.crt with IP's: []
	I1217 20:01:40.711007  640931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.crt ...
	I1217 20:01:40.711049  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.crt: {Name:mk84f63aa1b0f523a9901f13703f64b0cb1f4ef6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.711301  640931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.key ...
	I1217 20:01:40.711322  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.key: {Name:mk0d56d9d8cb62cac637a6138890b270afda247a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.711444  640931 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key.c28860c5
	I1217 20:01:40.711463  640931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt.c28860c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 20:01:40.796524  640931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt.c28860c5 ...
	I1217 20:01:40.796562  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt.c28860c5: {Name:mk342b73deffaee9e2fbed5c9afe4fa3ff2c9d1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.796773  640931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key.c28860c5 ...
	I1217 20:01:40.796797  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key.c28860c5: {Name:mk630b063af33492dde95ec0db252159fdfb5ff9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.796964  640931 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt.c28860c5 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt
	I1217 20:01:40.797097  640931 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key.c28860c5 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key
	I1217 20:01:40.797198  640931 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.key
	I1217 20:01:40.797223  640931 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.crt with IP's: []
	I1217 20:01:40.935326  640931 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.crt ...
	I1217 20:01:40.935354  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.crt: {Name:mkd6349e9dabf7d41fd98c26eefcf53812430d62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.935534  640931 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.key ...
	I1217 20:01:40.935555  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.key: {Name:mk00ce8d913f210faa95ae17a30d599129fab29d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:40.935781  640931 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:01:40.935823  640931 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:01:40.935834  640931 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:01:40.935857  640931 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:01:40.935880  640931 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:01:40.935902  640931 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:01:40.935949  640931 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:01:40.936624  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:01:40.956911  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:01:40.976231  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:01:40.994850  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:01:41.015190  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:01:41.034947  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:01:41.053714  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:01:41.072316  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:01:41.091917  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:01:41.113818  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:01:41.132757  640931 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:01:41.150412  640931 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:01:41.163344  640931 ssh_runner.go:195] Run: openssl version
	I1217 20:01:41.169521  640931 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:41.177036  640931 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:01:41.184933  640931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:41.188882  640931 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:41.188941  640931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:41.224403  640931 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:01:41.232838  640931 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:01:41.240402  640931 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:01:41.248142  640931 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:01:41.255777  640931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:01:41.259998  640931 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:01:41.260057  640931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:01:41.297318  640931 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:01:41.305778  640931 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:01:41.313289  640931 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:01:41.320661  640931 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:01:41.328096  640931 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:01:41.332007  640931 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:01:41.332062  640931 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:01:41.368919  640931 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:01:41.377860  640931 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
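The openssl/ln sequence above is how the node's trust store gets wired up: each CA PEM is copied under /usr/share/ca-certificates, its subject hash is computed with `openssl x509 -hash -noout`, and /etc/ssl/certs/<hash>.0 is symlinked to the PEM so OpenSSL-based clients can resolve it. A small sketch of the same wiring (not minikube's certs.go; it assumes openssl is installed and the program runs with enough privilege to write under /etc/ssl/certs):

// link_ca_hash.go: reproduce one `openssl x509 -hash` + `ln -fs` pair from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the symlink name above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic the force flag of `ln -fs`
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}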
	I1217 20:01:41.386430  640931 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:01:41.390436  640931 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:01:41.390499  640931 kubeadm.go:401] StartCluster: {Name:newest-cni-420762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:01:41.390591  640931 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:01:41.390635  640931 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:01:41.420987  640931 cri.go:89] found id: ""
	I1217 20:01:41.421057  640931 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:01:41.431465  640931 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:01:41.443618  640931 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:01:41.443684  640931 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:01:41.452667  640931 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:01:41.452691  640931 kubeadm.go:158] found existing configuration files:
	
	I1217 20:01:41.452746  640931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:01:41.462640  640931 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:01:41.462711  640931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:01:41.472438  640931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:01:41.480786  640931 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:01:41.480843  640931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:01:41.489970  640931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:01:41.498908  640931 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:01:41.498957  640931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:01:41.507849  640931 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:01:41.516255  640931 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:01:41.516327  640931 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:01:41.525020  640931 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:01:41.571709  640931 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:01:41.571774  640931 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:01:41.648555  640931 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:01:41.648667  640931 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:01:41.648725  640931 kubeadm.go:319] OS: Linux
	I1217 20:01:41.648791  640931 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:01:41.648854  640931 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:01:41.648925  640931 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:01:41.648990  640931 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:01:41.649053  640931 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:01:41.649145  640931 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:01:41.649218  640931 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:01:41.649289  640931 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:01:41.713573  640931 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:01:41.713753  640931 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:01:41.713880  640931 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:01:41.722706  640931 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:01:38.257266  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:38.257783  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:38.257849  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:38.257911  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:38.304815  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:38.304845  596882 cri.go:89] found id: ""
	I1217 20:01:38.304856  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:38.304991  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:38.310464  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:38.310554  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:38.341636  596882 cri.go:89] found id: ""
	I1217 20:01:38.341667  596882 logs.go:282] 0 containers: []
	W1217 20:01:38.341676  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:38.341682  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:38.341745  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:38.372680  596882 cri.go:89] found id: ""
	I1217 20:01:38.372709  596882 logs.go:282] 0 containers: []
	W1217 20:01:38.372721  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:38.372730  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:38.372794  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:38.407884  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:38.407912  596882 cri.go:89] found id: ""
	I1217 20:01:38.407923  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:38.407994  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:38.415605  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:38.415720  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:38.449947  596882 cri.go:89] found id: ""
	I1217 20:01:38.449991  596882 logs.go:282] 0 containers: []
	W1217 20:01:38.450000  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:38.450006  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:38.450121  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:38.482434  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:38.482457  596882 cri.go:89] found id: ""
	I1217 20:01:38.482467  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:38.482528  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:38.486835  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:38.486901  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:38.518995  596882 cri.go:89] found id: ""
	I1217 20:01:38.519027  596882 logs.go:282] 0 containers: []
	W1217 20:01:38.519038  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:38.519046  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:38.519137  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:38.548129  596882 cri.go:89] found id: ""
	I1217 20:01:38.548162  596882 logs.go:282] 0 containers: []
	W1217 20:01:38.548174  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:38.548186  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:38.548210  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:38.564607  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:38.564644  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:38.630570  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:38.630601  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:38.630620  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:38.664793  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:38.664836  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:38.695844  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:38.695884  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:38.730000  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:38.730037  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:38.782137  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:38.782172  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:38.813910  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:38.813938  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:41.405259  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:41.405686  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:41.405742  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:41.405794  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:41.437393  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:41.437421  596882 cri.go:89] found id: ""
	I1217 20:01:41.437432  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:41.437497  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:41.442137  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:41.442227  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:41.472390  596882 cri.go:89] found id: ""
	I1217 20:01:41.472421  596882 logs.go:282] 0 containers: []
	W1217 20:01:41.472433  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:41.472441  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:41.472500  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:41.502769  596882 cri.go:89] found id: ""
	I1217 20:01:41.502803  596882 logs.go:282] 0 containers: []
	W1217 20:01:41.502813  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:41.502822  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:41.502884  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:41.536294  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:41.536317  596882 cri.go:89] found id: ""
	I1217 20:01:41.536326  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:41.536387  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:41.540728  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:41.540800  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:41.571065  596882 cri.go:89] found id: ""
	I1217 20:01:41.571129  596882 logs.go:282] 0 containers: []
	W1217 20:01:41.571141  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:41.571150  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:41.571228  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:41.603808  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:41.603833  596882 cri.go:89] found id: ""
	I1217 20:01:41.603843  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:41.603905  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:41.607861  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:41.607916  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:41.639348  596882 cri.go:89] found id: ""
	I1217 20:01:41.639378  596882 logs.go:282] 0 containers: []
	W1217 20:01:41.639390  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:41.639398  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:41.639466  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:41.667292  596882 cri.go:89] found id: ""
	I1217 20:01:41.667326  596882 logs.go:282] 0 containers: []
	W1217 20:01:41.667337  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:41.667349  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:41.667365  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:41.685375  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:41.685408  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:41.747463  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:41.747487  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:41.747503  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:41.780325  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:41.780369  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:41.811676  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:41.811705  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:41.840322  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:41.840356  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:41.892023  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:41.892060  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:41.925264  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:41.925293  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
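The interleaved log lines from process 596882 above show a wait loop: each round polls the apiserver at https://192.168.76.2:8443/healthz, and while the connection is refused it gathers dmesg, kubelet, CRI-O, and per-container logs before retrying. A minimal sketch of that health probe (not minikube's api_server.go; the address comes from the log above, and TLS verification is skipped because this is a raw liveness probe, not an authenticated API call):

// poll_healthz.go: retry the apiserver /healthz endpoint until it answers.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8443/healthz" // address taken from the log above
	for i := 0; i < 30; i++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connect: connection refused, as in the log
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz:", resp.Status)
		return
	}
}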
	I1217 20:01:41.725494  640931 out.go:252]   - Generating certificates and keys ...
	I1217 20:01:41.725607  640931 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:01:41.725715  640931 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:01:41.796961  640931 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:01:41.861381  640931 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:01:41.929615  640931 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:01:41.979940  640931 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:01:42.138965  640931 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:01:42.139161  640931 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-420762] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 20:01:42.177349  640931 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:01:42.177508  640931 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-420762] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1217 20:01:42.313778  640931 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:01:42.353160  640931 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:01:42.406400  640931 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:01:42.406509  640931 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:01:42.595919  640931 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:01:42.609991  640931 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:01:42.697923  640931 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:01:42.765434  640931 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:01:42.812194  640931 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:01:42.812717  640931 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:01:42.816490  640931 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:01:40.011027  641791 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-147021:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.138766078s)
	I1217 20:01:40.011065  641791 kic.go:203] duration metric: took 4.138945227s to extract preloaded images to volume ...
	W1217 20:01:40.011182  641791 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:01:40.011252  641791 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:01:40.011311  641791 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:01:40.079456  641791 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-147021 --name embed-certs-147021 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-147021 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-147021 --network embed-certs-147021 --ip 192.168.85.2 --volume embed-certs-147021:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 20:01:40.391595  641791 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Running}}
	I1217 20:01:40.411970  641791 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:01:40.432777  641791 cli_runner.go:164] Run: docker exec embed-certs-147021 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:01:40.479004  641791 oci.go:144] the created container "embed-certs-147021" has a running status.
	I1217 20:01:40.479060  641791 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa...
	I1217 20:01:40.626491  641791 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:01:40.657751  641791 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:01:40.679294  641791 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:01:40.679316  641791 kic_runner.go:114] Args: [docker exec --privileged embed-certs-147021 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:01:40.738874  641791 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:01:40.759271  641791 machine.go:94] provisionDockerMachine start ...
	I1217 20:01:40.759366  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:40.780468  641791 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:40.780863  641791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1217 20:01:40.780878  641791 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:01:40.928256  641791 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-147021
	
	I1217 20:01:40.928289  641791 ubuntu.go:182] provisioning hostname "embed-certs-147021"
	I1217 20:01:40.928374  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:40.949214  641791 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:40.949533  641791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1217 20:01:40.949555  641791 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-147021 && echo "embed-certs-147021" | sudo tee /etc/hostname
	I1217 20:01:41.110379  641791 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-147021
	
	I1217 20:01:41.110482  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:41.129384  641791 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:41.129613  641791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1217 20:01:41.129632  641791 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-147021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-147021/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-147021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:01:41.275675  641791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:01:41.275712  641791 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:01:41.275736  641791 ubuntu.go:190] setting up certificates
	I1217 20:01:41.275746  641791 provision.go:84] configureAuth start
	I1217 20:01:41.275809  641791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:01:41.296371  641791 provision.go:143] copyHostCerts
	I1217 20:01:41.296431  641791 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:01:41.296448  641791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:01:41.296532  641791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:01:41.296710  641791 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:01:41.296725  641791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:01:41.296772  641791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:01:41.296860  641791 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:01:41.296870  641791 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:01:41.296906  641791 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:01:41.296989  641791 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.embed-certs-147021 san=[127.0.0.1 192.168.85.2 embed-certs-147021 localhost minikube]
	I1217 20:01:41.363064  641791 provision.go:177] copyRemoteCerts
	I1217 20:01:41.363133  641791 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:01:41.363175  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:41.382040  641791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:01:41.488616  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:01:41.513524  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:01:41.535658  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:01:41.555360  641791 provision.go:87] duration metric: took 279.597722ms to configureAuth
	I1217 20:01:41.555395  641791 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:01:41.555714  641791 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:01:41.555868  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:41.578441  641791 main.go:143] libmachine: Using SSH client type: native
	I1217 20:01:41.578657  641791 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1217 20:01:41.578676  641791 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:01:41.894463  641791 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:01:41.894485  641791 machine.go:97] duration metric: took 1.135192869s to provisionDockerMachine
	I1217 20:01:41.894496  641791 client.go:176] duration metric: took 8.590626281s to LocalClient.Create
	I1217 20:01:41.894518  641791 start.go:167] duration metric: took 8.590686351s to libmachine.API.Create "embed-certs-147021"
	I1217 20:01:41.894530  641791 start.go:293] postStartSetup for "embed-certs-147021" (driver="docker")
	I1217 20:01:41.894543  641791 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:01:41.894613  641791 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:01:41.894662  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:41.915246  641791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:01:42.020503  641791 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:01:42.024271  641791 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:01:42.024309  641791 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:01:42.024325  641791 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:01:42.024391  641791 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:01:42.024509  641791 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:01:42.024627  641791 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:01:42.032276  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:01:42.051704  641791 start.go:296] duration metric: took 157.157047ms for postStartSetup
	I1217 20:01:42.052160  641791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:01:42.070935  641791 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:01:42.071263  641791 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:01:42.071319  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:42.090968  641791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:01:42.190344  641791 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:01:42.195674  641791 start.go:128] duration metric: took 8.894917952s to createHost
	I1217 20:01:42.195701  641791 start.go:83] releasing machines lock for "embed-certs-147021", held for 8.895100986s
	I1217 20:01:42.195771  641791 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:01:42.216290  641791 ssh_runner.go:195] Run: cat /version.json
	I1217 20:01:42.216347  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:42.216356  641791 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:01:42.216452  641791 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:01:42.237269  641791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:01:42.237502  641791 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:01:42.394757  641791 ssh_runner.go:195] Run: systemctl --version
	I1217 20:01:42.401907  641791 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:01:42.440821  641791 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:01:42.446150  641791 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:01:42.446243  641791 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:01:42.475309  641791 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:01:42.475338  641791 start.go:496] detecting cgroup driver to use...
	I1217 20:01:42.475371  641791 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:01:42.475438  641791 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:01:42.493055  641791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:01:42.506772  641791 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:01:42.506847  641791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:01:42.524263  641791 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:01:42.542476  641791 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:01:42.627503  641791 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:01:42.719348  641791 docker.go:234] disabling docker service ...
	I1217 20:01:42.719415  641791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:01:42.739113  641791 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:01:42.752766  641791 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:01:42.850216  641791 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:01:42.950583  641791 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:01:42.963490  641791 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:01:42.979424  641791 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:01:42.979482  641791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:42.991473  641791 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:01:42.991546  641791 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:43.002796  641791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:43.013860  641791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:43.023185  641791 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:01:43.032294  641791 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:43.041912  641791 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:43.056230  641791 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:01:43.065598  641791 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:01:43.073391  641791 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:01:43.081123  641791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:43.173655  641791 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:01:43.310493  641791 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:01:43.310561  641791 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:01:43.314936  641791 start.go:564] Will wait 60s for crictl version
	I1217 20:01:43.315000  641791 ssh_runner.go:195] Run: which crictl
	I1217 20:01:43.319050  641791 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:01:43.345355  641791 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:01:43.345442  641791 ssh_runner.go:195] Run: crio --version
	I1217 20:01:43.375016  641791 ssh_runner.go:195] Run: crio --version
	I1217 20:01:43.408043  641791 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:01:42.817877  640931 out.go:252]   - Booting up control plane ...
	I1217 20:01:42.818032  640931 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:01:42.818173  640931 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:01:42.818959  640931 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:01:42.847225  640931 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:01:42.847372  640931 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:01:42.856016  640931 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:01:42.856275  640931 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:01:42.856370  640931 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:01:42.970630  640931 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:01:42.970826  640931 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:01:43.472296  640931 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.826543ms
	I1217 20:01:43.475725  640931 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:01:43.475888  640931 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1217 20:01:43.476032  640931 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:01:43.476161  640931 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:01:44.481678  640931 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.005787574s
	I1217 20:01:44.511046  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:44.511428  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:44.511489  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:44.511541  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:44.539715  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:44.539739  596882 cri.go:89] found id: ""
	I1217 20:01:44.539749  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:44.539814  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:44.543899  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:44.543972  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:44.572820  596882 cri.go:89] found id: ""
	I1217 20:01:44.572841  596882 logs.go:282] 0 containers: []
	W1217 20:01:44.572849  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:44.572856  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:44.572903  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:44.602369  596882 cri.go:89] found id: ""
	I1217 20:01:44.602399  596882 logs.go:282] 0 containers: []
	W1217 20:01:44.602412  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:44.602420  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:44.602488  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:44.631745  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:44.631769  596882 cri.go:89] found id: ""
	I1217 20:01:44.631780  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:44.631844  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:44.636392  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:44.636451  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:44.666315  596882 cri.go:89] found id: ""
	I1217 20:01:44.666340  596882 logs.go:282] 0 containers: []
	W1217 20:01:44.666349  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:44.666356  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:44.666416  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:44.700101  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:44.700125  596882 cri.go:89] found id: ""
	I1217 20:01:44.700137  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:44.700200  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:44.705197  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:44.705273  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:44.746127  596882 cri.go:89] found id: ""
	I1217 20:01:44.746157  596882 logs.go:282] 0 containers: []
	W1217 20:01:44.746230  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:44.746245  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:44.746338  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:44.783190  596882 cri.go:89] found id: ""
	I1217 20:01:44.783234  596882 logs.go:282] 0 containers: []
	W1217 20:01:44.783244  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:44.783258  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:44.783275  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:44.807339  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:44.807380  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:44.885065  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:44.885135  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:44.885153  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:44.920558  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:44.920593  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:44.954582  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:44.954620  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:44.988380  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:44.988419  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:45.054301  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:45.054344  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:45.091065  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:45.091119  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:45.743508  640931 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.267719472s
	I1217 20:01:47.477795  640931 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002079463s
	I1217 20:01:47.498676  640931 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:01:47.511214  640931 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:01:47.521869  640931 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:01:47.522068  640931 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-420762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:01:47.531797  640931 kubeadm.go:319] [bootstrap-token] Using token: xth6iz.qq1qma46tjcjg5ww
	I1217 20:01:43.409257  641791 cli_runner.go:164] Run: docker network inspect embed-certs-147021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:01:43.433310  641791 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 20:01:43.438535  641791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:01:43.453108  641791 kubeadm.go:884] updating cluster {Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:01:43.453256  641791 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:01:43.453317  641791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:01:43.489473  641791 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:01:43.489501  641791 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:01:43.489551  641791 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:01:43.516569  641791 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:01:43.516595  641791 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:01:43.516605  641791 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1217 20:01:43.516712  641791 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-147021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:01:43.516801  641791 ssh_runner.go:195] Run: crio config
	I1217 20:01:43.565967  641791 cni.go:84] Creating CNI manager for ""
	I1217 20:01:43.565995  641791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:43.566015  641791 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:01:43.566040  641791 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-147021 NodeName:embed-certs-147021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:01:43.566229  641791 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-147021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:01:43.566300  641791 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:01:43.575271  641791 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:01:43.575342  641791 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:01:43.583953  641791 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1217 20:01:43.597723  641791 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:01:43.613840  641791 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1217 20:01:43.626944  641791 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:01:43.630635  641791 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:01:43.641108  641791 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:43.720477  641791 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:43.745670  641791 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021 for IP: 192.168.85.2
	I1217 20:01:43.745694  641791 certs.go:195] generating shared ca certs ...
	I1217 20:01:43.745715  641791 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:43.745876  641791 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:01:43.745937  641791 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:01:43.745956  641791 certs.go:257] generating profile certs ...
	I1217 20:01:43.746034  641791 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.key
	I1217 20:01:43.746058  641791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.crt with IP's: []
	I1217 20:01:43.786537  641791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.crt ...
	I1217 20:01:43.786574  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.crt: {Name:mk46e465dc4b2ef7ec897a9566b65da44bbae127 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:43.786794  641791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.key ...
	I1217 20:01:43.786811  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.key: {Name:mk1ae6d3af8b986dbbc359cdf1cead6c8aeac07c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:43.786970  641791 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key.45939a3a
	I1217 20:01:43.786998  641791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt.45939a3a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1217 20:01:43.825236  641791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt.45939a3a ...
	I1217 20:01:43.825269  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt.45939a3a: {Name:mk096bb8c967189b56c3d4d9cfe6b4eef778af4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:43.825477  641791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key.45939a3a ...
	I1217 20:01:43.825497  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key.45939a3a: {Name:mk154ddc35c4d2370e2c581bc56d813740fc868a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:43.825620  641791 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt.45939a3a -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt
	I1217 20:01:43.825717  641791 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key.45939a3a -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key
	I1217 20:01:43.825786  641791 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key
	I1217 20:01:43.825806  641791 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.crt with IP's: []
	I1217 20:01:44.169114  641791 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.crt ...
	I1217 20:01:44.169150  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.crt: {Name:mkb0a0f53ac27016e958034635499744000681be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:44.169399  641791 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key ...
	I1217 20:01:44.169421  641791 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key: {Name:mk9f25bb6091343e005557b204c84dc253d1bc5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:44.169645  641791 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:01:44.169699  641791 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:01:44.169714  641791 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:01:44.169749  641791 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:01:44.169786  641791 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:01:44.169827  641791 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:01:44.169891  641791 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:01:44.170527  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:01:44.190534  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:01:44.210639  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:01:44.228757  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:01:44.248552  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 20:01:44.269222  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:01:44.286886  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:01:44.304394  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:01:44.322737  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:01:44.343166  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:01:44.360978  641791 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:01:44.379226  641791 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:01:44.392223  641791 ssh_runner.go:195] Run: openssl version
	I1217 20:01:44.398376  641791 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:01:44.405700  641791 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:01:44.413517  641791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:01:44.417204  641791 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:01:44.417264  641791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:01:44.458532  641791 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:01:44.469710  641791 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:01:44.479873  641791 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:44.489184  641791 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:01:44.496926  641791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:44.500699  641791 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:44.500762  641791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:01:44.538725  641791 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:01:44.547335  641791 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:01:44.554741  641791 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:01:44.563274  641791 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:01:44.572170  641791 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:01:44.576235  641791 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:01:44.576296  641791 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:01:44.620952  641791 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:01:44.630562  641791 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:01:44.639742  641791 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:01:44.643494  641791 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:01:44.643550  641791 kubeadm.go:401] StartCluster: {Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:01:44.643636  641791 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:01:44.643690  641791 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:01:44.676948  641791 cri.go:89] found id: ""
	I1217 20:01:44.677039  641791 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:01:44.687282  641791 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:01:44.697062  641791 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:01:44.697140  641791 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:01:44.707469  641791 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:01:44.707491  641791 kubeadm.go:158] found existing configuration files:
	
	I1217 20:01:44.707568  641791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:01:44.719356  641791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:01:44.719414  641791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:01:44.728233  641791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:01:44.741423  641791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:01:44.741522  641791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:01:44.751958  641791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:01:44.763492  641791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:01:44.763557  641791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:01:44.774054  641791 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:01:44.784341  641791 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:01:44.784404  641791 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:01:44.792696  641791 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:01:44.873145  641791 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:01:44.952033  641791 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:01:47.533327  640931 out.go:252]   - Configuring RBAC rules ...
	I1217 20:01:47.533507  640931 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:01:47.537341  640931 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:01:47.544454  640931 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:01:47.547344  640931 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:01:47.551697  640931 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:01:47.554698  640931 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:01:47.886368  640931 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:01:48.303031  640931 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:01:48.885876  640931 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:01:48.887060  640931 kubeadm.go:319] 
	I1217 20:01:48.887197  640931 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:01:48.887208  640931 kubeadm.go:319] 
	I1217 20:01:48.887311  640931 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:01:48.887326  640931 kubeadm.go:319] 
	I1217 20:01:48.887349  640931 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:01:48.887422  640931 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:01:48.887468  640931 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:01:48.887474  640931 kubeadm.go:319] 
	I1217 20:01:48.887537  640931 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:01:48.887545  640931 kubeadm.go:319] 
	I1217 20:01:48.887585  640931 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:01:48.887590  640931 kubeadm.go:319] 
	I1217 20:01:48.887681  640931 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:01:48.887797  640931 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:01:48.887888  640931 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:01:48.887897  640931 kubeadm.go:319] 
	I1217 20:01:48.888038  640931 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:01:48.888155  640931 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:01:48.888168  640931 kubeadm.go:319] 
	I1217 20:01:48.888298  640931 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token xth6iz.qq1qma46tjcjg5ww \
	I1217 20:01:48.888428  640931 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:01:48.888450  640931 kubeadm.go:319] 	--control-plane 
	I1217 20:01:48.888454  640931 kubeadm.go:319] 
	I1217 20:01:48.888520  640931 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:01:48.888529  640931 kubeadm.go:319] 
	I1217 20:01:48.888592  640931 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token xth6iz.qq1qma46tjcjg5ww \
	I1217 20:01:48.888677  640931 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:01:48.891648  640931 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:01:48.891746  640931 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:01:48.891763  640931 cni.go:84] Creating CNI manager for ""
	I1217 20:01:48.891771  640931 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:48.893444  640931 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:01:48.894678  640931 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:01:48.898938  640931 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl ...
	I1217 20:01:48.898957  640931 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:01:48.912917  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:01:49.133579  640931 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:01:49.133662  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:49.133683  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-420762 minikube.k8s.io/updated_at=2025_12_17T20_01_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=newest-cni-420762 minikube.k8s.io/primary=true
	I1217 20:01:49.144096  640931 ops.go:34] apiserver oom_adj: -16
	I1217 20:01:49.225981  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:49.726215  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:50.226600  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:47.702394  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:47.702857  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:47.702929  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:47.702993  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:47.737871  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:47.737895  596882 cri.go:89] found id: ""
	I1217 20:01:47.737905  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:47.737969  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:47.742773  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:47.742860  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:47.777397  596882 cri.go:89] found id: ""
	I1217 20:01:47.777425  596882 logs.go:282] 0 containers: []
	W1217 20:01:47.777436  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:47.777444  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:47.777508  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:47.807514  596882 cri.go:89] found id: ""
	I1217 20:01:47.807543  596882 logs.go:282] 0 containers: []
	W1217 20:01:47.807552  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:47.807559  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:47.807613  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:47.836673  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:47.836696  596882 cri.go:89] found id: ""
	I1217 20:01:47.836706  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:47.836760  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:47.840998  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:47.841087  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:47.869439  596882 cri.go:89] found id: ""
	I1217 20:01:47.869465  596882 logs.go:282] 0 containers: []
	W1217 20:01:47.869474  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:47.869480  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:47.869528  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:47.900725  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:47.900749  596882 cri.go:89] found id: ""
	I1217 20:01:47.900760  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:47.900826  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:47.904988  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:47.905071  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:47.941664  596882 cri.go:89] found id: ""
	I1217 20:01:47.941696  596882 logs.go:282] 0 containers: []
	W1217 20:01:47.941708  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:47.941716  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:47.941782  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:47.973452  596882 cri.go:89] found id: ""
	I1217 20:01:47.973478  596882 logs.go:282] 0 containers: []
	W1217 20:01:47.973489  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:47.973504  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:47.973520  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:48.035681  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:48.035703  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:48.035720  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:48.074432  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:48.074476  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:48.109843  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:48.109880  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:48.149228  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:48.149266  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:48.206203  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:48.206243  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:01:48.240563  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:48.240603  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:48.357687  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:48.357738  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:50.878469  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:01:50.878984  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:01:50.879052  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:01:50.879145  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:01:50.912607  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:50.912631  596882 cri.go:89] found id: ""
	I1217 20:01:50.912642  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:01:50.912704  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:50.917030  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:01:50.917132  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:01:50.949169  596882 cri.go:89] found id: ""
	I1217 20:01:50.949203  596882 logs.go:282] 0 containers: []
	W1217 20:01:50.949216  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:01:50.949223  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:01:50.949285  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:01:50.978985  596882 cri.go:89] found id: ""
	I1217 20:01:50.979023  596882 logs.go:282] 0 containers: []
	W1217 20:01:50.979037  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:01:50.979047  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:01:50.979137  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:01:51.007837  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:51.007865  596882 cri.go:89] found id: ""
	I1217 20:01:51.007875  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:01:51.007947  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:51.012862  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:01:51.012937  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:01:51.046259  596882 cri.go:89] found id: ""
	I1217 20:01:51.046290  596882 logs.go:282] 0 containers: []
	W1217 20:01:51.046301  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:01:51.046313  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:01:51.046376  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:01:51.078470  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:51.078493  596882 cri.go:89] found id: ""
	I1217 20:01:51.078504  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:01:51.078565  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:01:51.082880  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:01:51.082962  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:01:51.112418  596882 cri.go:89] found id: ""
	I1217 20:01:51.112449  596882 logs.go:282] 0 containers: []
	W1217 20:01:51.112462  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:01:51.112470  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:01:51.112531  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:01:51.140833  596882 cri.go:89] found id: ""
	I1217 20:01:51.140869  596882 logs.go:282] 0 containers: []
	W1217 20:01:51.140880  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:01:51.140894  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:01:51.140910  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:01:51.230721  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:01:51.230771  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:01:51.254193  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:01:51.254231  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:01:51.332025  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:01:51.332054  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:01:51.332073  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:01:51.368792  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:01:51.368838  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:01:51.407497  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:01:51.407527  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:01:51.441975  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:01:51.442004  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:01:51.498149  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:01:51.498201  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
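	The two blocks above are one pass of minikube's health-check and log-gathering loop for this profile: the probe against https://192.168.76.2:8443/healthz keeps returning "connection refused", only the kube-apiserver, kube-scheduler and kube-controller-manager containers are found (no etcd, coredns, kube-proxy, kindnet or storage-provisioner), so the same crictl listings and log collection repeat a few seconds later. A rough manual equivalent on the node, using only the commands already shown in the Run: lines above, would be:
	    sudo crictl ps -a --quiet --name=kube-apiserver
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig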
	I1217 20:01:50.726115  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:51.226135  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:51.726317  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:52.228277  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:52.726654  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:53.226576  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:53.726056  640931 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:01:53.812544  640931 kubeadm.go:1114] duration metric: took 4.678930042s to wait for elevateKubeSystemPrivileges
	I1217 20:01:53.812612  640931 kubeadm.go:403] duration metric: took 12.422108525s to StartCluster
	I1217 20:01:53.812639  640931 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:53.812729  640931 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:01:53.814370  640931 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:01:53.814660  640931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:01:53.814679  640931 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:01:53.814749  640931 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:01:53.814883  640931 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-420762"
	I1217 20:01:53.814905  640931 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-420762"
	I1217 20:01:53.814908  640931 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:01:53.814916  640931 addons.go:70] Setting default-storageclass=true in profile "newest-cni-420762"
	I1217 20:01:53.814940  640931 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-420762"
	I1217 20:01:53.814945  640931 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:01:53.815372  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:01:53.815545  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:01:53.816323  640931 out.go:179] * Verifying Kubernetes components...
	I1217 20:01:53.817869  640931 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:01:53.844318  640931 addons.go:239] Setting addon default-storageclass=true in "newest-cni-420762"
	I1217 20:01:53.844367  640931 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:01:53.844856  640931 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:01:53.846439  640931 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:01:53.848226  640931 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:53.848250  640931 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:01:53.848316  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:53.875835  640931 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:53.875981  640931 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:01:53.876127  640931 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:01:53.877902  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:53.905393  640931 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:01:53.920868  640931 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:01:53.992115  640931 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:01:54.002939  640931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:01:54.029299  640931 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:01:54.126687  640931 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 20:01:54.127984  640931 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:01:54.128055  640931 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:01:54.369190  640931 api_server.go:72] duration metric: took 554.473486ms to wait for apiserver process to appear ...
	I1217 20:01:54.369228  640931 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:01:54.369251  640931 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:01:54.377334  640931 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 20:01:54.378594  640931 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 20:01:54.378626  640931 api_server.go:131] duration metric: took 9.390014ms to wait for apiserver health ...
	I1217 20:01:54.378641  640931 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:01:54.379892  640931 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:01:54.382219  640931 system_pods.go:59] 8 kube-system pods found
	I1217 20:01:54.382249  640931 system_pods.go:61] "coredns-7d764666f9-jsv2j" [262483f9-bcc1-4054-871a-16cfad4a4abd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:01:54.382258  640931 system_pods.go:61] "etcd-newest-cni-420762" [70516caa-a886-4a08-95db-bc22f8c6a7d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:01:54.382269  640931 system_pods.go:61] "kindnet-2f44p" [1888eaab-a42f-4c23-87e4-6c698a41af87] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 20:01:54.382289  640931 system_pods.go:61] "kube-apiserver-newest-cni-420762" [8fa67084-5bff-41b5-bdfa-65290314913d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:01:54.382302  640931 system_pods.go:61] "kube-controller-manager-newest-cni-420762" [732ac716-843a-468b-8ed7-4b94e35445d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:01:54.382322  640931 system_pods.go:61] "kube-proxy-qpt8z" [5bbdb455-62b1-48ac-a4d9-b930a3dc010f] Running
	I1217 20:01:54.382328  640931 system_pods.go:61] "kube-scheduler-newest-cni-420762" [ae106497-db01-4129-ad94-7e637ad3278c] Running
	I1217 20:01:54.382339  640931 system_pods.go:61] "storage-provisioner" [4d3bd70b-556b-4c14-a933-2636b424730f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:01:54.382347  640931 system_pods.go:74] duration metric: took 3.698274ms to wait for pod list to return data ...
	I1217 20:01:54.382361  640931 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:01:54.382914  640931 addons.go:530] duration metric: took 568.165901ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:01:54.385322  640931 default_sa.go:45] found service account: "default"
	I1217 20:01:54.385346  640931 default_sa.go:55] duration metric: took 2.972682ms for default service account to be created ...
	I1217 20:01:54.385358  640931 kubeadm.go:587] duration metric: took 570.648726ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 20:01:54.385375  640931 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:01:54.388239  640931 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:01:54.388272  640931 node_conditions.go:123] node cpu capacity is 8
	I1217 20:01:54.388293  640931 node_conditions.go:105] duration metric: took 2.910906ms to run NodePressure ...
	I1217 20:01:54.388311  640931 start.go:242] waiting for startup goroutines ...
	I1217 20:01:54.632222  640931 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-420762" context rescaled to 1 replicas
	I1217 20:01:54.632261  640931 start.go:247] waiting for cluster config update ...
	I1217 20:01:54.632277  640931 start.go:256] writing updated cluster config ...
	I1217 20:01:54.632689  640931 ssh_runner.go:195] Run: rm -f paused
	I1217 20:01:54.701420  640931 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:01:54.703669  640931 out.go:179] * Done! kubectl is now configured to use "newest-cni-420762" cluster and "default" namespace by default
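	At this point the newest-cni-420762 start is complete: the apiserver answered healthz with 200, both addons (storage-provisioner, default-storageclass) are enabled, and the kubeconfig has been rewritten. A quick spot-check from the host would look roughly like the following, assuming minikube's usual convention that the kubectl context is named after the profile:
	    kubectl --context newest-cni-420762 get nodes
	    kubectl --context newest-cni-420762 -n kube-system get pods
	    kubectl --context newest-cni-420762 get sa default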
	I1217 20:01:55.559543  641791 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:01:55.559619  641791 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:01:55.559741  641791 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:01:55.559826  641791 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:01:55.559871  641791 kubeadm.go:319] OS: Linux
	I1217 20:01:55.559962  641791 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:01:55.560060  641791 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:01:55.560181  641791 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:01:55.560297  641791 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:01:55.560370  641791 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:01:55.560450  641791 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:01:55.560547  641791 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:01:55.560626  641791 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:01:55.560710  641791 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:01:55.560815  641791 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:01:55.560935  641791 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:01:55.561010  641791 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:01:55.563071  641791 out.go:252]   - Generating certificates and keys ...
	I1217 20:01:55.563187  641791 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:01:55.563289  641791 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:01:55.563389  641791 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:01:55.563480  641791 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:01:55.563575  641791 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:01:55.563647  641791 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:01:55.563733  641791 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:01:55.563905  641791 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-147021 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 20:01:55.563981  641791 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:01:55.564187  641791 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-147021 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1217 20:01:55.564305  641791 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:01:55.564395  641791 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:01:55.564463  641791 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:01:55.564550  641791 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:01:55.564638  641791 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:01:55.564740  641791 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:01:55.564836  641791 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:01:55.564956  641791 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:01:55.565043  641791 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:01:55.565201  641791 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:01:55.565309  641791 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:01:55.566763  641791 out.go:252]   - Booting up control plane ...
	I1217 20:01:55.566872  641791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:01:55.566967  641791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:01:55.567060  641791 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:01:55.567203  641791 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:01:55.567286  641791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:01:55.567375  641791 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:01:55.567443  641791 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:01:55.567474  641791 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:01:55.567707  641791 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:01:55.568380  641791 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:01:55.568460  641791 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.188409ms
	I1217 20:01:55.568536  641791 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:01:55.568634  641791 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1217 20:01:55.568777  641791 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:01:55.568897  641791 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:01:55.569007  641791 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.119140976s
	I1217 20:01:55.569127  641791 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.229049917s
	I1217 20:01:55.569218  641791 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002177205s
	I1217 20:01:55.569352  641791 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:01:55.569518  641791 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:01:55.569591  641791 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:01:55.569825  641791 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-147021 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:01:55.569910  641791 kubeadm.go:319] [bootstrap-token] Using token: ermbp3.k39u12rdt78f0qrm
	I1217 20:01:55.572214  641791 out.go:252]   - Configuring RBAC rules ...
	I1217 20:01:55.572393  641791 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:01:55.572510  641791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:01:55.572699  641791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:01:55.573494  641791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:01:55.573678  641791 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:01:55.573811  641791 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:01:55.573983  641791 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:01:55.574045  641791 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:01:55.574137  641791 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:01:55.574178  641791 kubeadm.go:319] 
	I1217 20:01:55.574258  641791 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:01:55.574268  641791 kubeadm.go:319] 
	I1217 20:01:55.574389  641791 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:01:55.574409  641791 kubeadm.go:319] 
	I1217 20:01:55.574444  641791 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:01:55.574533  641791 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:01:55.574604  641791 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:01:55.574616  641791 kubeadm.go:319] 
	I1217 20:01:55.574731  641791 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:01:55.574759  641791 kubeadm.go:319] 
	I1217 20:01:55.574817  641791 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:01:55.574822  641791 kubeadm.go:319] 
	I1217 20:01:55.574875  641791 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:01:55.574975  641791 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:01:55.575065  641791 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:01:55.575102  641791 kubeadm.go:319] 
	I1217 20:01:55.575241  641791 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:01:55.575350  641791 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:01:55.575361  641791 kubeadm.go:319] 
	I1217 20:01:55.575466  641791 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ermbp3.k39u12rdt78f0qrm \
	I1217 20:01:55.575607  641791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:01:55.575639  641791 kubeadm.go:319] 	--control-plane 
	I1217 20:01:55.575647  641791 kubeadm.go:319] 
	I1217 20:01:55.575755  641791 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:01:55.575766  641791 kubeadm.go:319] 
	I1217 20:01:55.575883  641791 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ermbp3.k39u12rdt78f0qrm \
	I1217 20:01:55.576017  641791 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:01:55.576032  641791 cni.go:84] Creating CNI manager for ""
	I1217 20:01:55.576045  641791 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:01:55.577617  641791 out.go:179] * Configuring CNI (Container Networking Interface) ...
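	The kubeadm output above is the stock success message; for this single-node embed-certs-147021 profile the printed join commands are not used, and minikube moves straight on to laying down the kindnet CNI (per the cni.go lines). As that output itself notes, the cluster can be inspected from inside the node with the generated admin kubeconfig, roughly:
	    export KUBECONFIG=/etc/kubernetes/admin.conf
	    kubectl get nodes
	    kubectl -n kube-system get pods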
	
	
	==> CRI-O <==
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.057971299Z" level=info msg="Ran pod sandbox 99f619c3c812d5523a7ce497a3bc4f8f0cc716b1c0c911e9d10ef53e7e591ae5 with infra container: kube-system/kube-proxy-qpt8z/POD" id=23504301-da71-4afa-bef0-0f4512afe987 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.060749077Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=5cc2a6aa-d59b-4156-bd6b-2f0e1fe80cb6 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.061208012Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=1a4ee929-55c2-41bc-b954-a3cf359d58b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.061457991Z" level=info msg="Image docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 not found" id=1a4ee929-55c2-41bc-b954-a3cf359d58b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.061541964Z" level=info msg="Neither image nor artifact docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88 found" id=1a4ee929-55c2-41bc-b954-a3cf359d58b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.06274051Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=f9d68c7a-1d66-4ede-9c1f-f39d63954cdb name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.064617205Z" level=info msg="Pulling image: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=a2ee2b1b-6983-4470-a455-130fffd43f81 name=/runtime.v1.ImageService/PullImage
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.067262202Z" level=info msg="Creating container: kube-system/kube-proxy-qpt8z/kube-proxy" id=b4bc39f0-f2a2-43fb-aa7e-a00bd336c785 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.067442827Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.07238526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.07301809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.076439863Z" level=info msg="Trying to access \"docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88\""
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.118822741Z" level=info msg="Created container 8b98d6d1ccaaa903e09a672515ee2e27ec712395095d9d00c9749723a1de1989: kube-system/kube-proxy-qpt8z/kube-proxy" id=b4bc39f0-f2a2-43fb-aa7e-a00bd336c785 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.120579851Z" level=info msg="Starting container: 8b98d6d1ccaaa903e09a672515ee2e27ec712395095d9d00c9749723a1de1989" id=91cc598b-24b1-4803-b152-ccb470224cc1 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:54 newest-cni-420762 crio[773]: time="2025-12-17T20:01:54.123995371Z" level=info msg="Started container" PID=1574 containerID=8b98d6d1ccaaa903e09a672515ee2e27ec712395095d9d00c9749723a1de1989 description=kube-system/kube-proxy-qpt8z/kube-proxy id=91cc598b-24b1-4803-b152-ccb470224cc1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=99f619c3c812d5523a7ce497a3bc4f8f0cc716b1c0c911e9d10ef53e7e591ae5
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.713706927Z" level=info msg="Pulled image: docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27" id=a2ee2b1b-6983-4470-a455-130fffd43f81 name=/runtime.v1.ImageService/PullImage
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.714647498Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=9d4915ad-98fc-4af5-a4a0-4ca56ae9f282 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.717349897Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=eff3561a-f45e-4465-978e-a617614cbfaf name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.723530532Z" level=info msg="Creating container: kube-system/kindnet-2f44p/kindnet-cni" id=411b1613-a810-441e-8195-0098d342f702 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.72366008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.729591841Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.730187362Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.76582406Z" level=info msg="Created container 34f5b45b11725f33ef6b724bae1bbbe008974735b2a28391e55d594e555c18f3: kube-system/kindnet-2f44p/kindnet-cni" id=411b1613-a810-441e-8195-0098d342f702 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.767005548Z" level=info msg="Starting container: 34f5b45b11725f33ef6b724bae1bbbe008974735b2a28391e55d594e555c18f3" id=76b7e267-7107-47c8-b548-f4f2b375edf7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:01:55 newest-cni-420762 crio[773]: time="2025-12-17T20:01:55.770820455Z" level=info msg="Started container" PID=1826 containerID=34f5b45b11725f33ef6b724bae1bbbe008974735b2a28391e55d594e555c18f3 description=kube-system/kindnet-2f44p/kindnet-cni id=76b7e267-7107-47c8-b548-f4f2b375edf7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=66583b8e5183eaffe19b7cafc8a098418a99246f5778cf77c93a8d6913384748
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	34f5b45b11725       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27   Less than a second ago   Running             kindnet-cni               0                   66583b8e5183e       kindnet-2f44p                               kube-system
	8b98d6d1ccaaa       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a                                     2 seconds ago            Running             kube-proxy                0                   99f619c3c812d       kube-proxy-qpt8z                            kube-system
	271f1c60f35a9       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614                                     12 seconds ago           Running             kube-controller-manager   0                   dfcf5b034f388       kube-controller-manager-newest-cni-420762   kube-system
	31c49d44be635       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2                                     12 seconds ago           Running             etcd                      0                   a6e908534fa8c       etcd-newest-cni-420762                      kube-system
	e0bd9aa355124       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce                                     12 seconds ago           Running             kube-apiserver            0                   6ae670b964c5d       kube-apiserver-newest-cni-420762            kube-system
	a7ea708908b2f       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc                                     12 seconds ago           Running             kube-scheduler            0                   27b81a83f08d7       kube-scheduler-newest-cni-420762            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-420762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-420762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=newest-cni-420762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-420762
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:01:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:01:48 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:01:48 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:01:48 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 20:01:48 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-420762
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                9a0da974-6b92-462d-a556-ee8264e627f2
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-420762                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-2f44p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-420762             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-420762    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-qpt8z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-420762             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-420762 event: Registered Node newest-cni-420762 in Controller
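	The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above are both down to the missing CNI configuration in /etc/cni/net.d/; the kindnet-cni container shown as Running in the container status section is what eventually writes it. Assuming the same context-naming convention as above, one way to wait for the node to flip to Ready would be:
	    kubectl --context newest-cni-420762 wait --for=condition=Ready node/newest-cni-420762 --timeout=120s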
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [31c49d44be635316a851bc3207ac9ca86350ac3dc9457b34f701361580e32396] <==
	{"level":"info","ts":"2025-12-17T20:01:44.008842Z","caller":"membership/cluster.go:424","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T20:01:44.699155Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 1"}
	{"level":"info","ts":"2025-12-17T20:01:44.699255Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-12-17T20:01:44.699336Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 1"}
	{"level":"info","ts":"2025-12-17T20:01:44.699352Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:01:44.699371Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-12-17T20:01:44.700071Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T20:01:44.700246Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:01:44.700324Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-12-17T20:01:44.700379Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T20:01:44.701047Z","caller":"etcdserver/server.go:2420","msg":"setting up initial cluster version using v3 API","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T20:01:44.701651Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-420762 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T20:01:44.701748Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:01:44.701726Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:01:44.701969Z","caller":"membership/cluster.go:682","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T20:01:44.702012Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T20:01:44.702037Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T20:01:44.702093Z","caller":"api/capability.go:76","msg":"enabled capabilities for version","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T20:01:44.702208Z","caller":"etcdserver/server.go:2440","msg":"cluster version is updated","cluster-version":"3.6"}
	{"level":"info","ts":"2025-12-17T20:01:44.702266Z","caller":"version/monitor.go:116","msg":"cluster version differs from storage version.","cluster-version":"3.6.0","storage-version":"3.5.0"}
	{"level":"info","ts":"2025-12-17T20:01:44.702744Z","caller":"schema/migration.go:65","msg":"updated storage version","new-storage-version":"3.6.0"}
	{"level":"info","ts":"2025-12-17T20:01:44.703167Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:01:44.703300Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:01:44.706394Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-17T20:01:44.706418Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 20:01:56 up  1:44,  0 user,  load average: 4.17, 3.40, 2.41
	Linux newest-cni-420762 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [34f5b45b11725f33ef6b724bae1bbbe008974735b2a28391e55d594e555c18f3] <==
	I1217 20:01:56.062120       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:01:56.062447       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 20:01:56.062752       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:01:56.062783       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:01:56.062804       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:01:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:01:56.364177       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:01:56.364265       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:01:56.364297       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:01:56.364437       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [e0bd9aa35512426b499253d5ff64b4eb9eaa54a2cdff1a461de6985859cdee92] <==
	I1217 20:01:45.781633       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1217 20:01:45.781656       1 default_servicecidr_controller.go:169] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1217 20:01:45.782781       1 controller.go:667] quota admission added evaluator for: namespaces
	E1217 20:01:45.783109       1 controller.go:201] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1217 20:01:45.789360       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:45.790632       1 default_servicecidr_controller.go:231] Setting default ServiceCIDR condition Ready to True
	I1217 20:01:45.796612       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:45.985979       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:01:46.685921       1 storage_scheduling.go:123] created PriorityClass system-node-critical with value 2000001000
	I1217 20:01:46.689882       1 storage_scheduling.go:123] created PriorityClass system-cluster-critical with value 2000000000
	I1217 20:01:46.689901       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 20:01:47.203249       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:01:47.241223       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:01:47.290499       1 alloc.go:329] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 20:01:47.296302       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1217 20:01:47.297478       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:01:47.301323       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:01:47.713798       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:01:48.292008       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:01:48.301928       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 20:01:48.312212       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 20:01:53.516321       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:01:53.618153       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:53.623890       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:53.716311       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [271f1c60f35a9a38c707de11c75d6a00c18b117faadf14882d79f2d6a750364e] <==
	I1217 20:01:52.519877       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.520301       1 range_allocator.go:177] "Sending events to api server"
	I1217 20:01:52.520344       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 20:01:52.519893       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519893       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519898       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519901       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519903       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519907       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519911       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519917       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519882       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.520052       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.520107       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.519905       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.520011       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.520359       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:01:52.520635       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.526520       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.531889       1 range_allocator.go:433] "Set node PodCIDR" node="newest-cni-420762" podCIDRs=["10.42.0.0/24"]
	I1217 20:01:52.532906       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:01:52.619573       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:52.619599       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 20:01:52.619606       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1217 20:01:52.633820       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [8b98d6d1ccaaa903e09a672515ee2e27ec712395095d9d00c9749723a1de1989] <==
	I1217 20:01:54.202669       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:01:54.265971       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:01:54.366900       1 shared_informer.go:377] "Caches are synced"
	I1217 20:01:54.366975       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 20:01:54.367417       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:01:54.396922       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:01:54.397030       1 server_linux.go:136] "Using iptables Proxier"
	I1217 20:01:54.405238       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:01:54.405796       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 20:01:54.406732       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:01:54.416255       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:01:54.416279       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:01:54.416310       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:01:54.416315       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:01:54.416695       1 config.go:200] "Starting service config controller"
	I1217 20:01:54.416746       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:01:54.417016       1 config.go:309] "Starting node config controller"
	I1217 20:01:54.417031       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:01:54.516845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:01:54.516878       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 20:01:54.516921       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:01:54.517245       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [a7ea708908b2f11492773e86cd6b298a213e3d437878e2a301e2e080874a42a7] <==
	E1217 20:01:45.743200       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 20:01:45.743448       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 20:01:45.743464       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1217 20:01:45.743651       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 20:01:45.743745       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1217 20:01:45.744339       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1217 20:01:45.744408       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1217 20:01:45.744459       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 20:01:45.744575       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 20:01:45.744609       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 20:01:45.744622       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1217 20:01:45.744635       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 20:01:45.744605       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1217 20:01:45.744714       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1217 20:01:45.744728       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1217 20:01:46.590820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1217 20:01:46.590820       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1217 20:01:46.608550       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1217 20:01:46.676457       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1217 20:01:46.721134       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1217 20:01:46.724463       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1217 20:01:46.735347       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1217 20:01:46.921603       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1217 20:01:47.028112       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1217 20:01:49.038571       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 20:01:49 newest-cni-420762 kubelet[1286]: E1217 20:01:49.180260    1286 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-420762\" already exists" pod="kube-system/kube-apiserver-newest-cni-420762"
	Dec 17 20:01:49 newest-cni-420762 kubelet[1286]: E1217 20:01:49.180342    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-420762" containerName="kube-apiserver"
	Dec 17 20:01:49 newest-cni-420762 kubelet[1286]: I1217 20:01:49.206482    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-420762" podStartSLOduration=1.206465778 podStartE2EDuration="1.206465778s" podCreationTimestamp="2025-12-17 20:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:49.195289836 +0000 UTC m=+1.152236612" watchObservedRunningTime="2025-12-17 20:01:49.206465778 +0000 UTC m=+1.163412538"
	Dec 17 20:01:49 newest-cni-420762 kubelet[1286]: I1217 20:01:49.206633    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-420762" podStartSLOduration=1.206625118 podStartE2EDuration="1.206625118s" podCreationTimestamp="2025-12-17 20:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:49.206578102 +0000 UTC m=+1.163524881" watchObservedRunningTime="2025-12-17 20:01:49.206625118 +0000 UTC m=+1.163571896"
	Dec 17 20:01:49 newest-cni-420762 kubelet[1286]: I1217 20:01:49.223562    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-420762" podStartSLOduration=1.223540645 podStartE2EDuration="1.223540645s" podCreationTimestamp="2025-12-17 20:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:49.215359686 +0000 UTC m=+1.172306464" watchObservedRunningTime="2025-12-17 20:01:49.223540645 +0000 UTC m=+1.180487422"
	Dec 17 20:01:49 newest-cni-420762 kubelet[1286]: I1217 20:01:49.416777    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-420762" podStartSLOduration=1.416764021 podStartE2EDuration="1.416764021s" podCreationTimestamp="2025-12-17 20:01:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:49.223787353 +0000 UTC m=+1.180734122" watchObservedRunningTime="2025-12-17 20:01:49.416764021 +0000 UTC m=+1.373710794"
	Dec 17 20:01:50 newest-cni-420762 kubelet[1286]: E1217 20:01:50.171141    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-420762" containerName="kube-scheduler"
	Dec 17 20:01:50 newest-cni-420762 kubelet[1286]: E1217 20:01:50.171210    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-420762" containerName="etcd"
	Dec 17 20:01:50 newest-cni-420762 kubelet[1286]: E1217 20:01:50.171306    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-420762" containerName="kube-apiserver"
	Dec 17 20:01:51 newest-cni-420762 kubelet[1286]: E1217 20:01:51.173176    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-420762" containerName="kube-scheduler"
	Dec 17 20:01:51 newest-cni-420762 kubelet[1286]: E1217 20:01:51.173286    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-420762" containerName="etcd"
	Dec 17 20:01:52 newest-cni-420762 kubelet[1286]: E1217 20:01:52.230103    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-420762" containerName="kube-apiserver"
	Dec 17 20:01:52 newest-cni-420762 kubelet[1286]: I1217 20:01:52.586046    1286 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 17 20:01:52 newest-cni-420762 kubelet[1286]: I1217 20:01:52.586832    1286 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 17 20:01:52 newest-cni-420762 kubelet[1286]: E1217 20:01:52.636932    1286 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-420762" containerName="kube-controller-manager"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.869481    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-cni-cfg\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870518    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-lib-modules\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870579    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-xtables-lock\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870600    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-lib-modules\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870631    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-kube-proxy\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870652    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p898\" (UniqueName: \"kubernetes.io/projected/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-kube-api-access-5p898\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870680    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-xtables-lock\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:01:53 newest-cni-420762 kubelet[1286]: I1217 20:01:53.870706    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh5b9\" (UniqueName: \"kubernetes.io/projected/1888eaab-a42f-4c23-87e4-6c698a41af87-kube-api-access-kh5b9\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:01:56 newest-cni-420762 kubelet[1286]: I1217 20:01:56.214342    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-qpt8z" podStartSLOduration=3.214319917 podStartE2EDuration="3.214319917s" podCreationTimestamp="2025-12-17 20:01:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:54.212676695 +0000 UTC m=+6.169623473" watchObservedRunningTime="2025-12-17 20:01:56.214319917 +0000 UTC m=+8.171266694"
	Dec 17 20:01:56 newest-cni-420762 kubelet[1286]: I1217 20:01:56.214511    1286 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kindnet-2f44p" podStartSLOduration=1.562451764 podStartE2EDuration="3.214501407s" podCreationTimestamp="2025-12-17 20:01:53 +0000 UTC" firstStartedPulling="2025-12-17 20:01:54.063918727 +0000 UTC m=+6.020865501" lastFinishedPulling="2025-12-17 20:01:55.715968374 +0000 UTC m=+7.672915144" observedRunningTime="2025-12-17 20:01:56.214435354 +0000 UTC m=+8.171382133" watchObservedRunningTime="2025-12-17 20:01:56.214501407 +0000 UTC m=+8.171448184"
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-420762 -n newest-cni-420762
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-420762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jsv2j storage-provisioner
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner: exit status 1 (61.062383ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jsv2j" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.57s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (291.526935ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
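For reference, the MK_ADDON_ENABLE_PAUSED exit above comes from the paused-state check, which runs `sudo runc list -f json` on the node and fails because /run/runc does not exist. A minimal sketch of repeating that check by hand against this profile (illustrative only, not part of the test run, and assuming the embed-certs-147021 node is still up):
	out/minikube-linux-amd64 -p embed-certs-147021 ssh -- sudo runc list -f json   # the call that returned status 1 in the stderr above
	out/minikube-linux-amd64 -p embed-certs-147021 ssh -- sudo crictl ps -a        # CRI-O's own view of the same containers, for comparison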
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-147021 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-147021 describe deploy/metrics-server -n kube-system: exit status 1 (89.08493ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-147021 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
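The image-override assertion above checks that the deployment's container image was rewritten to the fake.domain registry. Since the metrics-server deployment was never created in this run, the describe output is empty; a hypothetical sketch of the same check via jsonpath, had the deployment existed:
	kubectl --context embed-certs-147021 -n kube-system get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'
	# expected to print an image containing: fake.domain/registry.k8s.io/echoserver:1.4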
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-147021
helpers_test.go:244: (dbg) docker inspect embed-certs-147021:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8",
	        "Created": "2025-12-17T20:01:40.099829209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 644864,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:01:40.139452349Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/hosts",
	        "LogPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8-json.log",
	        "Name": "/embed-certs-147021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-147021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-147021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8",
	                "LowerDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-147021",
	                "Source": "/var/lib/docker/volumes/embed-certs-147021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-147021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-147021",
	                "name.minikube.sigs.k8s.io": "embed-certs-147021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "a050c6ed619f6bd333736b3f2ff3141535c5da347aa99580bd588fe16fd709c8",
	            "SandboxKey": "/var/run/docker/netns/a050c6ed619f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-147021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0eb8a5e286382abd016e9750b18658c10571b76b24cafa91dc20ab0a3e49d6a",
	                    "EndpointID": "d73c47969198e547c9c6c7f2cc44c0524ffa0806456a0c897297869609c46209",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "06:be:fc:71:37:81",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-147021",
	                        "83dda83adbe1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-147021 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-147021 logs -n 25: (1.332990723s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0      │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                          │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ delete  │ -p cert-expiration-059470                                                                                                                                                                                                                          │ cert-expiration-059470       │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:00 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                         │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759234 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p newest-cni-420762 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-420762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:02:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:02:16.235835  654009 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:16.236102  654009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:16.236115  654009 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:16.236122  654009 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:16.236324  654009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:16.236802  654009 out.go:368] Setting JSON to false
	I1217 20:02:16.237990  654009 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6287,"bootTime":1765995449,"procs":321,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:02:16.238047  654009 start.go:143] virtualization: kvm guest
	I1217 20:02:16.240089  654009 out.go:179] * [newest-cni-420762] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:02:16.241501  654009 notify.go:221] Checking for updates...
	I1217 20:02:16.241534  654009 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:02:16.243095  654009 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:02:16.244482  654009 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:16.246032  654009 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:02:16.250784  654009 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:02:16.252155  654009 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:02:16.253837  654009 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:16.254438  654009 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:02:16.278749  654009 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:02:16.278862  654009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:16.338477  654009 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 20:02:16.328530635 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:16.338579  654009 docker.go:319] overlay module found
	I1217 20:02:16.340501  654009 out.go:179] * Using the docker driver based on existing profile
	I1217 20:02:16.341955  654009 start.go:309] selected driver: docker
	I1217 20:02:16.341992  654009 start.go:927] validating driver "docker" against &{Name:newest-cni-420762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID
:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:16.342090  654009 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:02:16.342697  654009 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:16.404067  654009 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:75 SystemTime:2025-12-17 20:02:16.392580828 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:16.404442  654009 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 20:02:16.404491  654009 cni.go:84] Creating CNI manager for ""
	I1217 20:02:16.404569  654009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:16.404707  654009 start.go:353] cluster config:
	{Name:newest-cni-420762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:16.408089  654009 out.go:179] * Starting "newest-cni-420762" primary control-plane node in "newest-cni-420762" cluster
	I1217 20:02:16.409295  654009 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:02:16.410678  654009 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:02:16.411890  654009 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:02:16.411938  654009 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 20:02:16.411963  654009 cache.go:65] Caching tarball of preloaded images
	I1217 20:02:16.411977  654009 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:02:16.412131  654009 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:02:16.412146  654009 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:02:16.412276  654009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/config.json ...
	I1217 20:02:16.435435  654009 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:02:16.435461  654009 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:02:16.435478  654009 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:02:16.435505  654009 start.go:360] acquireMachinesLock for newest-cni-420762: {Name:mkcbfa827dcff20bdb15f1c7ce9c4c626112788f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:02:16.435570  654009 start.go:364] duration metric: took 41.573µs to acquireMachinesLock for "newest-cni-420762"
	I1217 20:02:16.435592  654009 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:02:16.435597  654009 fix.go:54] fixHost starting: 
	I1217 20:02:16.435848  654009 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:16.455107  654009 fix.go:112] recreateIfNeeded on newest-cni-420762: state=Stopped err=<nil>
	W1217 20:02:16.455153  654009 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:02:13.186795  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:02:13.187326  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:02:13.187390  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 20:02:13.187458  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 20:02:13.222807  596882 cri.go:89] found id: "dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:02:13.222833  596882 cri.go:89] found id: ""
	I1217 20:02:13.222841  596882 logs.go:282] 1 containers: [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e]
	I1217 20:02:13.222924  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:02:13.227522  596882 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 20:02:13.227603  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 20:02:13.256749  596882 cri.go:89] found id: ""
	I1217 20:02:13.256777  596882 logs.go:282] 0 containers: []
	W1217 20:02:13.256789  596882 logs.go:284] No container was found matching "etcd"
	I1217 20:02:13.256796  596882 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 20:02:13.256848  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 20:02:13.290140  596882 cri.go:89] found id: ""
	I1217 20:02:13.290172  596882 logs.go:282] 0 containers: []
	W1217 20:02:13.290185  596882 logs.go:284] No container was found matching "coredns"
	I1217 20:02:13.290200  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 20:02:13.290261  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 20:02:13.324939  596882 cri.go:89] found id: "26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:02:13.324962  596882 cri.go:89] found id: ""
	I1217 20:02:13.324973  596882 logs.go:282] 1 containers: [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29]
	I1217 20:02:13.325035  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:02:13.329961  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 20:02:13.330042  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 20:02:13.362577  596882 cri.go:89] found id: ""
	I1217 20:02:13.362607  596882 logs.go:282] 0 containers: []
	W1217 20:02:13.362620  596882 logs.go:284] No container was found matching "kube-proxy"
	I1217 20:02:13.362628  596882 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 20:02:13.362691  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 20:02:13.406292  596882 cri.go:89] found id: "711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:02:13.406318  596882 cri.go:89] found id: ""
	I1217 20:02:13.406328  596882 logs.go:282] 1 containers: [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476]
	I1217 20:02:13.406388  596882 ssh_runner.go:195] Run: which crictl
	I1217 20:02:13.411810  596882 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 20:02:13.411895  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 20:02:13.449033  596882 cri.go:89] found id: ""
	I1217 20:02:13.449071  596882 logs.go:282] 0 containers: []
	W1217 20:02:13.449112  596882 logs.go:284] No container was found matching "kindnet"
	I1217 20:02:13.449121  596882 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 20:02:13.449192  596882 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 20:02:13.480335  596882 cri.go:89] found id: ""
	I1217 20:02:13.480361  596882 logs.go:282] 0 containers: []
	W1217 20:02:13.480369  596882 logs.go:284] No container was found matching "storage-provisioner"
	I1217 20:02:13.480379  596882 logs.go:123] Gathering logs for describe nodes ...
	I1217 20:02:13.480395  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 20:02:13.556060  596882 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 20:02:13.556098  596882 logs.go:123] Gathering logs for kube-apiserver [dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e] ...
	I1217 20:02:13.556115  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dfcf129a23a9b4b8338549662d03dc9674e70494089b9acbd56ee28dd0e59a2e"
	I1217 20:02:13.597791  596882 logs.go:123] Gathering logs for kube-scheduler [26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29] ...
	I1217 20:02:13.597833  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 26afbca819064c614a7c269e4fbe3f73beb12920c9989c7a9adca8a87b8aee29"
	I1217 20:02:13.635229  596882 logs.go:123] Gathering logs for kube-controller-manager [711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476] ...
	I1217 20:02:13.635259  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 711081a1b65cc9754b1a9b8fd19fce7769b6a8e65b097e062aa1703f24e1a476"
	I1217 20:02:13.665326  596882 logs.go:123] Gathering logs for CRI-O ...
	I1217 20:02:13.665360  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1217 20:02:13.740338  596882 logs.go:123] Gathering logs for container status ...
	I1217 20:02:13.740381  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 20:02:13.790187  596882 logs.go:123] Gathering logs for kubelet ...
	I1217 20:02:13.790232  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 20:02:13.931181  596882 logs.go:123] Gathering logs for dmesg ...
	I1217 20:02:13.931242  596882 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 20:02:16.457731  596882 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:02:16.458198  596882 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1217 20:02:16.458266  596882 kubeadm.go:602] duration metric: took 4m2.892589149s to restartPrimaryControlPlane
	W1217 20:02:16.458320  596882 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 20:02:16.458374  596882 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1217 20:02:17.054277  596882 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:17.068466  596882 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:02:17.077396  596882 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:02:17.077475  596882 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:02:17.085775  596882 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:02:17.085801  596882 kubeadm.go:158] found existing configuration files:
	
	I1217 20:02:17.085845  596882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:02:17.093988  596882 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:02:17.094043  596882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:02:17.101740  596882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:02:17.109643  596882 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:02:17.109702  596882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:02:17.117651  596882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:02:17.125855  596882 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:02:17.125920  596882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:02:17.133849  596882 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:02:17.141960  596882 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:02:17.142023  596882 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:02:17.149671  596882 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:02:17.187774  596882 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1217 20:02:17.187841  596882 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:02:17.253157  596882 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:02:17.253251  596882 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:02:17.253312  596882 kubeadm.go:319] OS: Linux
	I1217 20:02:17.253363  596882 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:02:17.253404  596882 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:02:17.253472  596882 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:02:17.253545  596882 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:02:17.253611  596882 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:02:17.253683  596882 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:02:17.253727  596882 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:02:17.253779  596882 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:02:17.310970  596882 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:02:17.311169  596882 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:02:17.311338  596882 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:02:17.320164  596882 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:02:17.322102  596882 out.go:252]   - Generating certificates and keys ...
	I1217 20:02:17.322221  596882 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:02:17.322330  596882 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:02:17.322462  596882 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 20:02:17.322564  596882 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 20:02:17.322630  596882 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 20:02:17.322677  596882 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 20:02:17.322784  596882 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 20:02:17.322882  596882 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 20:02:17.322991  596882 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 20:02:17.323111  596882 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 20:02:17.323168  596882 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 20:02:17.323294  596882 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:02:17.425898  596882 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:02:17.492290  596882 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:02:17.531679  596882 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:02:17.634137  596882 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:02:17.732785  596882 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:02:17.733321  596882 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:02:17.735913  596882 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 20:02:17.322305  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	W1217 20:02:19.820545  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	I1217 20:02:16.457053  654009 out.go:252] * Restarting existing docker container for "newest-cni-420762" ...
	I1217 20:02:16.457175  654009 cli_runner.go:164] Run: docker start newest-cni-420762
	I1217 20:02:16.714422  654009 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:16.733315  654009 kic.go:430] container "newest-cni-420762" state is running.
	I1217 20:02:16.733708  654009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-420762
	I1217 20:02:16.753524  654009 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/config.json ...
	I1217 20:02:16.753787  654009 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:16.753866  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:16.774636  654009 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:16.774917  654009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1217 20:02:16.774930  654009 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:16.775618  654009 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:36628->127.0.0.1:33473: read: connection reset by peer
	I1217 20:02:19.931928  654009 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-420762
	
	I1217 20:02:19.931962  654009 ubuntu.go:182] provisioning hostname "newest-cni-420762"
	I1217 20:02:19.932033  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:19.953834  654009 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:19.954167  654009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1217 20:02:19.954187  654009 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-420762 && echo "newest-cni-420762" | sudo tee /etc/hostname
	I1217 20:02:20.116013  654009 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-420762
	
	I1217 20:02:20.116121  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:20.140408  654009 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:20.140718  654009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1217 20:02:20.140753  654009 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-420762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-420762/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-420762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:20.294887  654009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:20.294933  654009 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:20.294989  654009 ubuntu.go:190] setting up certificates
	I1217 20:02:20.295006  654009 provision.go:84] configureAuth start
	I1217 20:02:20.295061  654009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-420762
	I1217 20:02:20.313614  654009 provision.go:143] copyHostCerts
	I1217 20:02:20.313707  654009 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:20.313738  654009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:20.313846  654009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:20.314034  654009 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:20.314053  654009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:20.314110  654009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:20.314208  654009 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:20.314218  654009 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:20.314256  654009 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:20.314355  654009 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.newest-cni-420762 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-420762]
	I1217 20:02:20.413232  654009 provision.go:177] copyRemoteCerts
	I1217 20:02:20.413316  654009 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:20.413363  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:20.443676  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:20.567405  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:20.591162  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 20:02:20.610858  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:02:20.630512  654009 provision.go:87] duration metric: took 335.491936ms to configureAuth
	I1217 20:02:20.630545  654009 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:20.630856  654009 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:20.631017  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:20.649905  654009 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:20.650173  654009 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1217 20:02:20.650208  654009 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:20.984403  654009 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:20.984428  654009 machine.go:97] duration metric: took 4.230627108s to provisionDockerMachine
	I1217 20:02:20.984439  654009 start.go:293] postStartSetup for "newest-cni-420762" (driver="docker")
	I1217 20:02:20.984450  654009 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:20.984506  654009 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:20.984542  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:21.002330  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:21.105206  654009 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:21.109131  654009 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:21.109165  654009 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:21.109177  654009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:21.109239  654009 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:21.109349  654009 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:21.109489  654009 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:21.117984  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:21.137274  654009 start.go:296] duration metric: took 152.816714ms for postStartSetup
	I1217 20:02:21.137351  654009 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:21.137386  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:21.156427  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:17.737722  596882 out.go:252]   - Booting up control plane ...
	I1217 20:02:17.737841  596882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:02:17.737971  596882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:02:17.738694  596882 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:02:17.754206  596882 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:02:17.754380  596882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:02:17.763307  596882 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:02:17.764094  596882 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:02:17.764171  596882 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:02:17.882160  596882 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:02:17.882289  596882 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:02:18.384020  596882 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.988354ms
	I1217 20:02:18.387002  596882 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:02:18.387146  596882 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 20:02:18.387264  596882 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:02:18.387348  596882 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:02:18.892109  596882 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 504.971747ms
	I1217 20:02:20.593754  596882 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.206732225s
	I1217 20:02:22.388825  596882 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.0016659s
	I1217 20:02:22.404836  596882 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:02:22.414859  596882 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:02:22.428633  596882 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:02:22.428985  596882 kubeadm.go:319] [mark-control-plane] Marking the node kubernetes-upgrade-322567 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:02:22.441030  596882 kubeadm.go:319] [bootstrap-token] Using token: 828vip.xatk0rf1gw8kthwu
	I1217 20:02:21.257326  654009 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:21.262769  654009 fix.go:56] duration metric: took 4.827162824s for fixHost
	I1217 20:02:21.262804  654009 start.go:83] releasing machines lock for "newest-cni-420762", held for 4.827220427s
	I1217 20:02:21.262927  654009 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-420762
	I1217 20:02:21.281002  654009 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:21.281054  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:21.281098  654009 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:21.281232  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:21.300024  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:21.300580  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:21.462266  654009 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:21.469247  654009 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:21.505309  654009 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:21.510808  654009 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:21.510895  654009 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:21.520172  654009 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:02:21.520206  654009 start.go:496] detecting cgroup driver to use...
	I1217 20:02:21.520240  654009 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:21.520296  654009 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:21.539242  654009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:21.554989  654009 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:21.555057  654009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:21.574424  654009 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:21.589740  654009 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:21.687686  654009 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:21.795601  654009 docker.go:234] disabling docker service ...
	I1217 20:02:21.795680  654009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:21.813809  654009 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:21.830541  654009 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:21.933153  654009 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:22.035471  654009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:02:22.051656  654009 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:22.067963  654009 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:22.068031  654009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.080001  654009 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:22.080121  654009 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.091772  654009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.103417  654009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.114715  654009 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:22.125233  654009 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.136564  654009 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.146928  654009 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:22.157947  654009 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:22.166329  654009 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:22.174228  654009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:22.262966  654009 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:02:22.407792  654009 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:22.407862  654009 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:22.412697  654009 start.go:564] Will wait 60s for crictl version
	I1217 20:02:22.412763  654009 ssh_runner.go:195] Run: which crictl
	I1217 20:02:22.417898  654009 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:22.451461  654009 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:22.451558  654009 ssh_runner.go:195] Run: crio --version
	I1217 20:02:22.482516  654009 ssh_runner.go:195] Run: crio --version
	I1217 20:02:22.513899  654009 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:02:22.515137  654009 cli_runner.go:164] Run: docker network inspect newest-cni-420762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:22.533929  654009 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:22.538480  654009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:22.550676  654009 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 20:02:22.442536  596882 out.go:252]   - Configuring RBAC rules ...
	I1217 20:02:22.442701  596882 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:02:22.448056  596882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:02:22.454498  596882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:02:22.457344  596882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:02:22.460283  596882 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:02:22.462747  596882 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:02:22.796118  596882 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:02:23.218036  596882 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:02:23.795423  596882 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:02:23.796769  596882 kubeadm.go:319] 
	I1217 20:02:23.796865  596882 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:02:23.796872  596882 kubeadm.go:319] 
	I1217 20:02:23.796973  596882 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:02:23.796979  596882 kubeadm.go:319] 
	I1217 20:02:23.797023  596882 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:02:23.797127  596882 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:02:23.797194  596882 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:02:23.797207  596882 kubeadm.go:319] 
	I1217 20:02:23.797277  596882 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:02:23.797283  596882 kubeadm.go:319] 
	I1217 20:02:23.797345  596882 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:02:23.797351  596882 kubeadm.go:319] 
	I1217 20:02:23.797420  596882 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:02:23.797517  596882 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:02:23.797610  596882 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:02:23.797616  596882 kubeadm.go:319] 
	I1217 20:02:23.797730  596882 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:02:23.797834  596882 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:02:23.797839  596882 kubeadm.go:319] 
	I1217 20:02:23.797938  596882 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 828vip.xatk0rf1gw8kthwu \
	I1217 20:02:23.798071  596882 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:02:23.798113  596882 kubeadm.go:319] 	--control-plane 
	I1217 20:02:23.798120  596882 kubeadm.go:319] 
	I1217 20:02:23.798222  596882 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:02:23.798227  596882 kubeadm.go:319] 
	I1217 20:02:23.798314  596882 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 828vip.xatk0rf1gw8kthwu \
	I1217 20:02:23.798422  596882 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:02:23.800813  596882 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:02:23.800988  596882 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:02:23.801044  596882 cni.go:84] Creating CNI manager for ""
	I1217 20:02:23.801058  596882 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:23.803901  596882 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:02:22.551836  654009 kubeadm.go:884] updating cluster {Name:newest-cni-420762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:22.552041  654009 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:02:22.552129  654009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:22.589899  654009 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:22.589930  654009 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:22.590007  654009 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:22.621804  654009 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:22.621831  654009 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:22.621841  654009 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 20:02:22.621986  654009 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-420762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:02:22.622064  654009 ssh_runner.go:195] Run: crio config
	I1217 20:02:22.674532  654009 cni.go:84] Creating CNI manager for ""
	I1217 20:02:22.674554  654009 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:22.674568  654009 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 20:02:22.674592  654009 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-420762 NodeName:newest-cni-420762 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:22.674720  654009 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-420762"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:22.674788  654009 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:02:22.683449  654009 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:22.683514  654009 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:22.691847  654009 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1217 20:02:22.705533  654009 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:02:22.719779  654009 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
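	The kubeadm.yaml.new copied above bundles three YAML documents that have to agree with each other: the pod CIDR appears both as networking.podSubnet (ClusterConfiguration) and clusterCIDR (KubeProxyConfiguration), and the CRI-O socket appears both as nodeRegistration.criSocket (InitConfiguration) and containerRuntimeEndpoint (KubeletConfiguration). A minimal consistency check over the rendered file, offered only as an illustrative sketch and not part of the test run itself:
	
	  # both pairs should print matching values (10.42.0.0/16 and unix:///var/run/crio/crio.sock)
	  f=/var/tmp/minikube/kubeadm.yaml.new
	  grep -E 'podSubnet|clusterCIDR' "$f"
	  grep -E 'criSocket|containerRuntimeEndpoint' "$f"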
	I1217 20:02:22.733791  654009 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:22.737742  654009 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
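	The bash one-liner above rewrites /etc/hosts atomically: it filters out any existing control-plane.minikube.internal entry, appends the fresh 192.168.103.2 mapping, writes the result to a temp file, and copies it back into place with sudo. A quick way to confirm the outcome (the same grep the log runs just before the rewrite):
	
	  grep 'control-plane.minikube.internal' /etc/hosts
	  # expected: 192.168.103.2	control-plane.minikube.internal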
	I1217 20:02:22.748284  654009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:22.844276  654009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:22.871294  654009 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762 for IP: 192.168.103.2
	I1217 20:02:22.871319  654009 certs.go:195] generating shared ca certs ...
	I1217 20:02:22.871341  654009 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:22.871522  654009 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:22.871580  654009 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:22.871593  654009 certs.go:257] generating profile certs ...
	I1217 20:02:22.871698  654009 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/client.key
	I1217 20:02:22.871772  654009 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key.c28860c5
	I1217 20:02:22.871889  654009 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.key
	I1217 20:02:22.872058  654009 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:22.872121  654009 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:22.872136  654009 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:22.872175  654009 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:22.872210  654009 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:22.872245  654009 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:22.872310  654009 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:22.873364  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:22.896659  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:22.919843  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:22.942945  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:22.968089  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 20:02:22.994700  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:02:23.017426  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:23.043145  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/newest-cni-420762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:02:23.072504  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:23.097436  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:23.119632  654009 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:23.141944  654009 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:23.157733  654009 ssh_runner.go:195] Run: openssl version
	I1217 20:02:23.165884  654009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:23.176762  654009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:23.188121  654009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:23.193772  654009 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:23.193840  654009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:23.244263  654009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:23.253317  654009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:23.262966  654009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:23.273014  654009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:23.278302  654009 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:23.278364  654009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:23.317714  654009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:23.327029  654009 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:23.335333  654009 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:23.343305  654009 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:23.347466  654009 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:23.347532  654009 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:23.384136  654009 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:23.393397  654009 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:23.398439  654009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:02:23.451170  654009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:02:23.504889  654009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:02:23.564564  654009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:02:23.623886  654009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:02:23.665549  654009 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
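	Two openssl idioms recur in the block above. "openssl x509 -hash -noout" prints the subject hash that OpenSSL uses to look up CA files, which is why the symlinks checked earlier are named b5213941.0, 51391683.0 and 3ec20f2e.0, and "-checkend 86400" exits non-zero if the certificate expires within the next 86400 seconds (24 hours). A small illustrative sketch reusing paths from this run, not something the test executes:
	
	  # subject-hash symlink, as created above for the minikube CA
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	  # 24-hour expiry guard, same flag as the checks above
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "renewal needed"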
	I1217 20:02:23.720419  654009 kubeadm.go:401] StartCluster: {Name:newest-cni-420762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-420762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:23.720520  654009 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:23.720585  654009 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:23.754480  654009 cri.go:89] found id: "7d90e89ed2e6c5e28181da0ddfeb35b77f0b1a43e095576732addaa43e6437ba"
	I1217 20:02:23.754508  654009 cri.go:89] found id: "8544c715ea46d06c40c805d6d1253f17f885eca03855c5b880ed720d0fff20f4"
	I1217 20:02:23.754514  654009 cri.go:89] found id: "64b8df55df5230a0b1d5727316ee323fddc47f3997c667cf27faf9dbec35288f"
	I1217 20:02:23.754518  654009 cri.go:89] found id: "b7259506a4e5b6bee4d005c6c0116262f2d16fb84d5378bc6f468fae3b7d2570"
	I1217 20:02:23.754522  654009 cri.go:89] found id: ""
	I1217 20:02:23.754569  654009 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:02:23.768713  654009 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:23Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:23.768792  654009 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:23.779380  654009 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:02:23.779405  654009 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:02:23.779460  654009 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:02:23.788829  654009 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:02:23.789908  654009 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-420762" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:23.790725  654009 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-372245/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-420762" cluster setting kubeconfig missing "newest-cni-420762" context setting]
	I1217 20:02:23.791802  654009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:23.794064  654009 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:02:23.804274  654009 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1217 20:02:23.804312  654009 kubeadm.go:602] duration metric: took 24.900077ms to restartPrimaryControlPlane
	I1217 20:02:23.804325  654009 kubeadm.go:403] duration metric: took 83.918644ms to StartCluster
	I1217 20:02:23.804344  654009 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:23.804414  654009 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:23.806559  654009 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:23.806797  654009 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:02:23.806916  654009 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:02:23.807026  654009 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:23.807057  654009 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-420762"
	I1217 20:02:23.807104  654009 addons.go:70] Setting dashboard=true in profile "newest-cni-420762"
	I1217 20:02:23.807128  654009 addons.go:70] Setting default-storageclass=true in profile "newest-cni-420762"
	I1217 20:02:23.807143  654009 addons.go:239] Setting addon dashboard=true in "newest-cni-420762"
	I1217 20:02:23.807148  654009 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-420762"
	W1217 20:02:23.807154  654009 addons.go:248] addon dashboard should already be in state true
	I1217 20:02:23.807189  654009 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:02:23.807224  654009 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-420762"
	W1217 20:02:23.807241  654009 addons.go:248] addon storage-provisioner should already be in state true
	I1217 20:02:23.807281  654009 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:02:23.807491  654009 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:23.807671  654009 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:23.807732  654009 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:23.809502  654009 out.go:179] * Verifying Kubernetes components...
	I1217 20:02:23.810915  654009 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:23.836016  654009 addons.go:239] Setting addon default-storageclass=true in "newest-cni-420762"
	W1217 20:02:23.836037  654009 addons.go:248] addon default-storageclass should already be in state true
	I1217 20:02:23.836062  654009 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:02:23.836542  654009 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:23.836613  654009 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 20:02:23.836671  654009 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:02:23.838096  654009 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:23.838127  654009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:02:23.838196  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:23.838098  654009 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
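	The docker container inspect template used above to locate the SSH port indexes the NetworkSettings.Ports map by the "22/tcp" key, takes the first binding, and reads its HostPort. The same pattern works for any published container port; run locally against this profile it would look roughly like:
	
	  # prints the host port mapped to container port 22/tcp on the profile container
	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-420762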
	
	
	==> CRI-O <==
	Dec 17 20:02:13 embed-certs-147021 crio[774]: time="2025-12-17T20:02:13.756510594Z" level=info msg="Starting container: b51824a4fdd339f7255f564d0c24c21056c51caf442bfd36b9bf7ca1ee9d883d" id=1439bc80-0b2b-4d98-8bbd-0650e66288d7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:13 embed-certs-147021 crio[774]: time="2025-12-17T20:02:13.758993044Z" level=info msg="Started container" PID=1913 containerID=b51824a4fdd339f7255f564d0c24c21056c51caf442bfd36b9bf7ca1ee9d883d description=kube-system/coredns-66bc5c9577-wkvhv/coredns id=1439bc80-0b2b-4d98-8bbd-0650e66288d7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=792f81dda76be3b773a918e71a1c939eced9bc7c11f3627f65271b76a38c70d5
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.3726713Z" level=info msg="Running pod sandbox: default/busybox/POD" id=151bf016-3072-42c5-9197-29565f9b09aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.372759249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.378774044Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d40bca6dd53f5113f57ee6704019de4abbfe02f728ec05080bcb1c41406ca9ae UID:b9b3f47b-58e5-41d0-a3ca-8afa30e0116e NetNS:/var/run/netns/78217396-629e-448d-a4ba-68e6e7b63ef2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003a1298}] Aliases:map[]}"
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.378815064Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.390499304Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d40bca6dd53f5113f57ee6704019de4abbfe02f728ec05080bcb1c41406ca9ae UID:b9b3f47b-58e5-41d0-a3ca-8afa30e0116e NetNS:/var/run/netns/78217396-629e-448d-a4ba-68e6e7b63ef2 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0003a1298}] Aliases:map[]}"
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.390680628Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.391705116Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.392794729Z" level=info msg="Ran pod sandbox d40bca6dd53f5113f57ee6704019de4abbfe02f728ec05080bcb1c41406ca9ae with infra container: default/busybox/POD" id=151bf016-3072-42c5-9197-29565f9b09aa name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.394202779Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e1c60e4c-2cb0-4286-b526-4139285787f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.394355739Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e1c60e4c-2cb0-4286-b526-4139285787f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.394406907Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e1c60e4c-2cb0-4286-b526-4139285787f9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.394992415Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9a220375-29c1-4fde-8189-5aa1602a89cb name=/runtime.v1.ImageService/PullImage
	Dec 17 20:02:16 embed-certs-147021 crio[774]: time="2025-12-17T20:02:16.39641504Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.733049079Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9a220375-29c1-4fde-8189-5aa1602a89cb name=/runtime.v1.ImageService/PullImage
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.73377186Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=36134f94-7293-44eb-9ecb-7197dff271cc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.735432372Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=580d31ab-a0cf-4f66-9b48-56f88a2ab935 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.739309896Z" level=info msg="Creating container: default/busybox/busybox" id=f21077ca-9c58-48a7-beb3-3b4a0f326c88 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.739440633Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.743729898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.744191659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.768406275Z" level=info msg="Created container 1bfab113ddb4757a4d4b73c344aad61d7f10533497a7cb3a18556d4a19495e44: default/busybox/busybox" id=f21077ca-9c58-48a7-beb3-3b4a0f326c88 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.769125148Z" level=info msg="Starting container: 1bfab113ddb4757a4d4b73c344aad61d7f10533497a7cb3a18556d4a19495e44" id=8764b0e1-4e41-4027-9467-19b7f73d0561 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:17 embed-certs-147021 crio[774]: time="2025-12-17T20:02:17.771142429Z" level=info msg="Started container" PID=1991 containerID=1bfab113ddb4757a4d4b73c344aad61d7f10533497a7cb3a18556d4a19495e44 description=default/busybox/busybox id=8764b0e1-4e41-4027-9467-19b7f73d0561 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d40bca6dd53f5113f57ee6704019de4abbfe02f728ec05080bcb1c41406ca9ae
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	1bfab113ddb47       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   d40bca6dd53f5       busybox                                      default
	b51824a4fdd33       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   792f81dda76be       coredns-66bc5c9577-wkvhv                     kube-system
	a84d8ecdefda2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   5c3d7c1646c8b       storage-provisioner                          kube-system
	20c109e81bca7       docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27    22 seconds ago      Running             kindnet-cni               0                   9698516e76d96       kindnet-qp6z8                                kube-system
	bbd8871f30caf       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                      23 seconds ago      Running             kube-proxy                0                   a6289969cbb20       kube-proxy-nwn9n                             kube-system
	cb071d29ab13d       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                      34 seconds ago      Running             kube-apiserver            0                   fa97cdcc48aaa       kube-apiserver-embed-certs-147021            kube-system
	79326439f72cb       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                      34 seconds ago      Running             etcd                      0                   a070fa6a6a7b5       etcd-embed-certs-147021                      kube-system
	3e3580ec5ac81       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                      34 seconds ago      Running             kube-controller-manager   0                   53372d22027d5       kube-controller-manager-embed-certs-147021   kube-system
	d7f5275c5f943       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                      34 seconds ago      Running             kube-scheduler            0                   61ce1e93ef0e8       kube-scheduler-embed-certs-147021            kube-system
	
	
	==> coredns [b51824a4fdd339f7255f564d0c24c21056c51caf442bfd36b9bf7ca1ee9d883d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33583 - 5896 "HINFO IN 2736481431655343464.4483724030483214464. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043877378s
	
	
	==> describe nodes <==
	Name:               embed-certs-147021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-147021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=embed-certs-147021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-147021
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:02:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:02:15 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:02:15 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:02:15 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:02:15 +0000   Wed, 17 Dec 2025 20:02:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-147021
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                c55125f4-5cb9-479d-a732-b6dc1626ae27
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-wkvhv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-147021                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-qp6z8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-147021             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-147021    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-nwn9n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-147021             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-147021 event: Registered Node embed-certs-147021 in Controller
	  Normal  NodeReady                11s                kubelet          Node embed-certs-147021 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [79326439f72cb2f579e500667458171eca1c045507a0791f354f54b5874d9240] <==
	{"level":"warn","ts":"2025-12-17T20:01:51.643057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.650290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.660062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.668881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.675978Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.682811Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.691850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.700597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.709525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.724632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.751089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58576","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.760893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.769801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.778405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.791569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.797394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.805192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.813162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.820893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.833758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.839372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.859509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.866675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.874996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:01:51.920441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58882","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 20:02:25 up  1:44,  0 user,  load average: 4.00, 3.42, 2.45
	Linux embed-certs-147021 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20c109e81bca708bede949d06a53c65cf03033476e4634d131fa8cfe57d3e759] <==
	I1217 20:02:02.741945       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:02.742297       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 20:02:02.742476       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:02.742506       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:02.742527       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:02.945739       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:02.945808       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:02.945821       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:03.037601       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:03.346331       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:03.346367       1 metrics.go:72] Registering metrics
	I1217 20:02:03.346448       1 controller.go:711] "Syncing nftables rules"
	I1217 20:02:12.947197       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:02:12.947331       1 main.go:301] handling current node
	I1217 20:02:22.950166       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:02:22.950216       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cb071d29ab13dcd50f641c89a31cf0b8e2bed136bc34828e2d5387bf9984d34e] <==
	I1217 20:01:52.430189       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:01:52.430229       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:01:52.435349       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:52.438140       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1217 20:01:52.445068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:01:52.445245       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1217 20:01:52.616617       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:01:53.332317       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1217 20:01:53.335936       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1217 20:01:53.335956       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:01:53.883130       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:01:53.934698       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:01:54.038721       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1217 20:01:54.046239       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1217 20:01:54.047831       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:01:54.058178       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:01:54.374930       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:01:54.962470       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:01:54.975781       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1217 20:01:54.985199       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1217 20:02:00.073530       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:02:00.375668       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:02:00.382135       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:02:00.473017       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1217 20:02:23.170380       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:38814: use of closed network connection
	
	
	==> kube-controller-manager [3e3580ec5ac814af27e5bb9b3a6c6d1802cbdde8b016d916d63c2316c38c5246] <==
	I1217 20:01:59.370887       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:01:59.370969       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1217 20:01:59.370975       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1217 20:01:59.370985       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:01:59.371166       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1217 20:01:59.371546       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:01:59.371559       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1217 20:01:59.371566       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1217 20:01:59.371735       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:01:59.372473       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:01:59.373698       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:01:59.373758       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:01:59.373758       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:01:59.373764       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:01:59.373788       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:01:59.373805       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:01:59.373813       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:01:59.373820       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:01:59.376156       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 20:01:59.378454       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1217 20:01:59.380934       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:01:59.381719       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-147021" podCIDRs=["10.244.0.0/24"]
	I1217 20:01:59.388798       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1217 20:01:59.394141       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1217 20:02:14.356633       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bbd8871f30caf7fd7f3c98a940cde82d1a8e79a715c0813bd34374f3d4d97528] <==
	I1217 20:02:00.924668       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:00.998125       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:02:01.098381       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:02:01.098424       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 20:02:01.098532       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:01.118751       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:01.118801       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:02:01.124024       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:01.124570       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:02:01.124616       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:01.126653       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:01.126680       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:01.126727       1 config.go:200] "Starting service config controller"
	I1217 20:02:01.126733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:01.126742       1 config.go:309] "Starting node config controller"
	I1217 20:02:01.126766       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:01.126773       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:01.126742       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:01.126781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:01.226845       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:02:01.226874       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:01.226900       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d7f5275c5f943fc942a1cbd70f8d4bbcb90de36bfba40021f68a41b484e55036] <==
	E1217 20:01:52.372380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:01:52.372406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1217 20:01:52.372466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:01:52.372469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 20:01:52.372490       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:01:52.372535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1217 20:01:52.372539       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1217 20:01:52.372621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1217 20:01:52.372669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:01:52.372671       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:01:53.199356       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1217 20:01:53.216977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1217 20:01:53.275251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1217 20:01:53.279556       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1217 20:01:53.286127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1217 20:01:53.368845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1217 20:01:53.379037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1217 20:01:53.482817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1217 20:01:53.490972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1217 20:01:53.510250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1217 20:01:53.532535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1217 20:01:53.572971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1217 20:01:53.641378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1217 20:01:53.656535       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1217 20:01:55.468674       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 17 20:01:55 embed-certs-147021 kubelet[1342]: I1217 20:01:55.927880    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-147021" podStartSLOduration=1.927863292 podStartE2EDuration="1.927863292s" podCreationTimestamp="2025-12-17 20:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:55.927766204 +0000 UTC m=+1.169093289" watchObservedRunningTime="2025-12-17 20:01:55.927863292 +0000 UTC m=+1.169190375"
	Dec 17 20:01:55 embed-certs-147021 kubelet[1342]: I1217 20:01:55.955310    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-147021" podStartSLOduration=1.955274748 podStartE2EDuration="1.955274748s" podCreationTimestamp="2025-12-17 20:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:55.941009522 +0000 UTC m=+1.182336608" watchObservedRunningTime="2025-12-17 20:01:55.955274748 +0000 UTC m=+1.196601833"
	Dec 17 20:01:55 embed-certs-147021 kubelet[1342]: I1217 20:01:55.975733    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-147021" podStartSLOduration=1.975709282 podStartE2EDuration="1.975709282s" podCreationTimestamp="2025-12-17 20:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:55.955661977 +0000 UTC m=+1.196989059" watchObservedRunningTime="2025-12-17 20:01:55.975709282 +0000 UTC m=+1.217036368"
	Dec 17 20:01:55 embed-certs-147021 kubelet[1342]: I1217 20:01:55.989316    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-147021" podStartSLOduration=1.989289774 podStartE2EDuration="1.989289774s" podCreationTimestamp="2025-12-17 20:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:01:55.976315777 +0000 UTC m=+1.217642867" watchObservedRunningTime="2025-12-17 20:01:55.989289774 +0000 UTC m=+1.230616857"
	Dec 17 20:01:59 embed-certs-147021 kubelet[1342]: I1217 20:01:59.430067    1342 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 17 20:01:59 embed-certs-147021 kubelet[1342]: I1217 20:01:59.430852    1342 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597234    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f98dd22-cea7-49e2-96b4-3025f53bda36-lib-modules\") pod \"kindnet-qp6z8\" (UID: \"2f98dd22-cea7-49e2-96b4-3025f53bda36\") " pod="kube-system/kindnet-qp6z8"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597312    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6a7ffc94-190c-4ded-8331-cc243b65c2bc-xtables-lock\") pod \"kube-proxy-nwn9n\" (UID: \"6a7ffc94-190c-4ded-8331-cc243b65c2bc\") " pod="kube-system/kube-proxy-nwn9n"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597380    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a7ffc94-190c-4ded-8331-cc243b65c2bc-lib-modules\") pod \"kube-proxy-nwn9n\" (UID: \"6a7ffc94-190c-4ded-8331-cc243b65c2bc\") " pod="kube-system/kube-proxy-nwn9n"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597459    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfwp\" (UniqueName: \"kubernetes.io/projected/6a7ffc94-190c-4ded-8331-cc243b65c2bc-kube-api-access-hsfwp\") pod \"kube-proxy-nwn9n\" (UID: \"6a7ffc94-190c-4ded-8331-cc243b65c2bc\") " pod="kube-system/kube-proxy-nwn9n"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597543    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2f98dd22-cea7-49e2-96b4-3025f53bda36-cni-cfg\") pod \"kindnet-qp6z8\" (UID: \"2f98dd22-cea7-49e2-96b4-3025f53bda36\") " pod="kube-system/kindnet-qp6z8"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597586    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f98dd22-cea7-49e2-96b4-3025f53bda36-xtables-lock\") pod \"kindnet-qp6z8\" (UID: \"2f98dd22-cea7-49e2-96b4-3025f53bda36\") " pod="kube-system/kindnet-qp6z8"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597637    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlxch\" (UniqueName: \"kubernetes.io/projected/2f98dd22-cea7-49e2-96b4-3025f53bda36-kube-api-access-hlxch\") pod \"kindnet-qp6z8\" (UID: \"2f98dd22-cea7-49e2-96b4-3025f53bda36\") " pod="kube-system/kindnet-qp6z8"
	Dec 17 20:02:00 embed-certs-147021 kubelet[1342]: I1217 20:02:00.597670    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6a7ffc94-190c-4ded-8331-cc243b65c2bc-kube-proxy\") pod \"kube-proxy-nwn9n\" (UID: \"6a7ffc94-190c-4ded-8331-cc243b65c2bc\") " pod="kube-system/kube-proxy-nwn9n"
	Dec 17 20:02:01 embed-certs-147021 kubelet[1342]: I1217 20:02:01.847536    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nwn9n" podStartSLOduration=1.8467090069999998 podStartE2EDuration="1.846709007s" podCreationTimestamp="2025-12-17 20:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:02:00.925422888 +0000 UTC m=+6.166749972" watchObservedRunningTime="2025-12-17 20:02:01.846709007 +0000 UTC m=+7.088036095"
	Dec 17 20:02:04 embed-certs-147021 kubelet[1342]: I1217 20:02:04.000069    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qp6z8" podStartSLOduration=2.27446524 podStartE2EDuration="4.000044189s" podCreationTimestamp="2025-12-17 20:02:00 +0000 UTC" firstStartedPulling="2025-12-17 20:02:00.819119307 +0000 UTC m=+6.060446373" lastFinishedPulling="2025-12-17 20:02:02.544698247 +0000 UTC m=+7.786025322" observedRunningTime="2025-12-17 20:02:02.929018824 +0000 UTC m=+8.170345909" watchObservedRunningTime="2025-12-17 20:02:04.000044189 +0000 UTC m=+9.241371290"
	Dec 17 20:02:13 embed-certs-147021 kubelet[1342]: I1217 20:02:13.361750    1342 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Dec 17 20:02:13 embed-certs-147021 kubelet[1342]: I1217 20:02:13.499577    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9q78\" (UniqueName: \"kubernetes.io/projected/8515815b-8ad1-4db6-9c1e-ac36c14d42ce-kube-api-access-g9q78\") pod \"storage-provisioner\" (UID: \"8515815b-8ad1-4db6-9c1e-ac36c14d42ce\") " pod="kube-system/storage-provisioner"
	Dec 17 20:02:13 embed-certs-147021 kubelet[1342]: I1217 20:02:13.499649    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa6b430f-e79f-4a53-b8c7-f51dd721cd13-config-volume\") pod \"coredns-66bc5c9577-wkvhv\" (UID: \"aa6b430f-e79f-4a53-b8c7-f51dd721cd13\") " pod="kube-system/coredns-66bc5c9577-wkvhv"
	Dec 17 20:02:13 embed-certs-147021 kubelet[1342]: I1217 20:02:13.499758    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rrjt\" (UniqueName: \"kubernetes.io/projected/aa6b430f-e79f-4a53-b8c7-f51dd721cd13-kube-api-access-5rrjt\") pod \"coredns-66bc5c9577-wkvhv\" (UID: \"aa6b430f-e79f-4a53-b8c7-f51dd721cd13\") " pod="kube-system/coredns-66bc5c9577-wkvhv"
	Dec 17 20:02:13 embed-certs-147021 kubelet[1342]: I1217 20:02:13.499800    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/8515815b-8ad1-4db6-9c1e-ac36c14d42ce-tmp\") pod \"storage-provisioner\" (UID: \"8515815b-8ad1-4db6-9c1e-ac36c14d42ce\") " pod="kube-system/storage-provisioner"
	Dec 17 20:02:13 embed-certs-147021 kubelet[1342]: I1217 20:02:13.959623    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-wkvhv" podStartSLOduration=13.95959838 podStartE2EDuration="13.95959838s" podCreationTimestamp="2025-12-17 20:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:02:13.959004663 +0000 UTC m=+19.200331749" watchObservedRunningTime="2025-12-17 20:02:13.95959838 +0000 UTC m=+19.200925467"
	Dec 17 20:02:16 embed-certs-147021 kubelet[1342]: I1217 20:02:16.065114    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=16.065056903 podStartE2EDuration="16.065056903s" podCreationTimestamp="2025-12-17 20:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-17 20:02:13.98794982 +0000 UTC m=+19.229276906" watchObservedRunningTime="2025-12-17 20:02:16.065056903 +0000 UTC m=+21.306383989"
	Dec 17 20:02:16 embed-certs-147021 kubelet[1342]: I1217 20:02:16.115414    1342 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpgph\" (UniqueName: \"kubernetes.io/projected/b9b3f47b-58e5-41d0-a3ca-8afa30e0116e-kube-api-access-mpgph\") pod \"busybox\" (UID: \"b9b3f47b-58e5-41d0-a3ca-8afa30e0116e\") " pod="default/busybox"
	Dec 17 20:02:17 embed-certs-147021 kubelet[1342]: I1217 20:02:17.969801    1342 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.629635621 podStartE2EDuration="1.969780215s" podCreationTimestamp="2025-12-17 20:02:16 +0000 UTC" firstStartedPulling="2025-12-17 20:02:16.3946648 +0000 UTC m=+21.635991872" lastFinishedPulling="2025-12-17 20:02:17.734809389 +0000 UTC m=+22.976136466" observedRunningTime="2025-12-17 20:02:17.969448301 +0000 UTC m=+23.210775388" watchObservedRunningTime="2025-12-17 20:02:17.969780215 +0000 UTC m=+23.211107300"
	
	
	==> storage-provisioner [a84d8ecdefda21a7be8c3f539f66d1f144e39d010c74acad1959017bf39d000f] <==
	I1217 20:02:13.768041       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:02:13.778687       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:02:13.778825       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:02:13.781578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:13.787959       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:02:13.788247       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:02:13.788361       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"910d36f2-445e-4325-a1df-6c5c1d1eea0a", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-147021_5f5cfc48-a1f0-464b-b040-3dfeceaba101 became leader
	I1217 20:02:13.788595       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-147021_5f5cfc48-a1f0-464b-b040-3dfeceaba101!
	W1217 20:02:13.792179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:13.799133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:02:13.889397       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-147021_5f5cfc48-a1f0-464b-b040-3dfeceaba101!
	W1217 20:02:15.802362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:15.829548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:17.833232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:17.838727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:19.842407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:19.848308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:21.851751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:21.856484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:23.863022       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:23.880512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-147021 -n embed-certs-147021
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-147021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.76s)
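Note on the captured logs above: the kube-scheduler "Failed to watch ... is forbidden" entries are transient start-up noise from before RBAC and the informer caches settled (the section ends with "Caches are synced"), so they are not necessarily the cause of the EnableAddonWhileActive failure itself. If those forbidden errors ever need to be ruled out by hand, an impersonated authorization check is one option; the resources below are simply the ones named in the captured errors:

	kubectl --context embed-certs-147021 auth can-i list persistentvolumes --as=system:kube-scheduler
	kubectl --context embed-certs-147021 auth can-i list nodes --as=system:kube-scheduler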

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-420762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-420762 --alsologtostderr -v=1: exit status 80 (1.920691279s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-420762 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:02:27.867375  657945 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:27.867694  657945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:27.867707  657945 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:27.867714  657945 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:27.868056  657945 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:27.868421  657945 out.go:368] Setting JSON to false
	I1217 20:02:27.868447  657945 mustload.go:66] Loading cluster: newest-cni-420762
	I1217 20:02:27.868845  657945 config.go:182] Loaded profile config "newest-cni-420762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:27.869417  657945 cli_runner.go:164] Run: docker container inspect newest-cni-420762 --format={{.State.Status}}
	I1217 20:02:27.888192  657945 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:02:27.888616  657945 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:27.957645  657945 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:88 OomKillDisable:false NGoroutines:93 SystemTime:2025-12-17 20:02:27.946883751 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:27.958636  657945 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-420762 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 20:02:27.961612  657945 out.go:179] * Pausing node newest-cni-420762 ... 
	I1217 20:02:27.962917  657945 host.go:66] Checking if "newest-cni-420762" exists ...
	I1217 20:02:27.963322  657945 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:27.963378  657945 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:27.982278  657945 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:28.094669  657945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:28.112407  657945 pause.go:52] kubelet running: true
	I1217 20:02:28.112487  657945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:02:28.284525  657945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:02:28.284619  657945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:02:28.361126  657945 cri.go:89] found id: "20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33"
	I1217 20:02:28.361152  657945 cri.go:89] found id: "ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd"
	I1217 20:02:28.361157  657945 cri.go:89] found id: "7d90e89ed2e6c5e28181da0ddfeb35b77f0b1a43e095576732addaa43e6437ba"
	I1217 20:02:28.361168  657945 cri.go:89] found id: "8544c715ea46d06c40c805d6d1253f17f885eca03855c5b880ed720d0fff20f4"
	I1217 20:02:28.361172  657945 cri.go:89] found id: "64b8df55df5230a0b1d5727316ee323fddc47f3997c667cf27faf9dbec35288f"
	I1217 20:02:28.361177  657945 cri.go:89] found id: "b7259506a4e5b6bee4d005c6c0116262f2d16fb84d5378bc6f468fae3b7d2570"
	I1217 20:02:28.361182  657945 cri.go:89] found id: ""
	I1217 20:02:28.361231  657945 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:02:28.375250  657945 retry.go:31] will retry after 354.763711ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:28Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:28.730884  657945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:28.745102  657945 pause.go:52] kubelet running: false
	I1217 20:02:28.745172  657945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:02:28.917789  657945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:02:28.917871  657945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:02:29.013073  657945 cri.go:89] found id: "20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33"
	I1217 20:02:29.013120  657945 cri.go:89] found id: "ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd"
	I1217 20:02:29.013126  657945 cri.go:89] found id: "7d90e89ed2e6c5e28181da0ddfeb35b77f0b1a43e095576732addaa43e6437ba"
	I1217 20:02:29.013131  657945 cri.go:89] found id: "8544c715ea46d06c40c805d6d1253f17f885eca03855c5b880ed720d0fff20f4"
	I1217 20:02:29.013136  657945 cri.go:89] found id: "64b8df55df5230a0b1d5727316ee323fddc47f3997c667cf27faf9dbec35288f"
	I1217 20:02:29.013141  657945 cri.go:89] found id: "b7259506a4e5b6bee4d005c6c0116262f2d16fb84d5378bc6f468fae3b7d2570"
	I1217 20:02:29.013145  657945 cri.go:89] found id: ""
	I1217 20:02:29.013198  657945 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:02:29.027556  657945 retry.go:31] will retry after 351.752969ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:29Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:29.380206  657945 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:29.398054  657945 pause.go:52] kubelet running: false
	I1217 20:02:29.398138  657945 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:02:29.565633  657945 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:02:29.565712  657945 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:02:29.663439  657945 cri.go:89] found id: "20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33"
	I1217 20:02:29.663467  657945 cri.go:89] found id: "ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd"
	I1217 20:02:29.663475  657945 cri.go:89] found id: "7d90e89ed2e6c5e28181da0ddfeb35b77f0b1a43e095576732addaa43e6437ba"
	I1217 20:02:29.663480  657945 cri.go:89] found id: "8544c715ea46d06c40c805d6d1253f17f885eca03855c5b880ed720d0fff20f4"
	I1217 20:02:29.663485  657945 cri.go:89] found id: "64b8df55df5230a0b1d5727316ee323fddc47f3997c667cf27faf9dbec35288f"
	I1217 20:02:29.663491  657945 cri.go:89] found id: "b7259506a4e5b6bee4d005c6c0116262f2d16fb84d5378bc6f468fae3b7d2570"
	I1217 20:02:29.663499  657945 cri.go:89] found id: ""
	I1217 20:02:29.663553  657945 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:02:29.687608  657945 out.go:203] 
	W1217 20:02:29.688974  657945 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:02:29.689002  657945 out.go:285] * 
	* 
	W1217 20:02:29.698119  657945 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:02:29.699817  657945 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-420762 --alsologtostderr -v=1 failed: exit status 80
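Note: the exit status 80 comes from minikube's pre-pause container listing: pause.go repeatedly runs "sudo runc list -f json" on the node and finally aborts with GUEST_PAUSE because every attempt fails with "open /run/runc: no such file or directory", even though crictl still sees the kube-system containers (both are visible in the stderr capture above). A rough manual check on the node, assuming the profile is still running, could look like the following; /run/crun is only a guess at an alternative runtime root, not something this report confirms:

	minikube -p newest-cni-420762 ssh "sudo crictl ps"                    # CRI-O still lists the containers
	minikube -p newest-cni-420762 ssh "sudo runc list -f json"            # the exact command the pause path retried
	minikube -p newest-cni-420762 ssh "sudo ls -ld /run/runc /run/crun"   # check which runtime state directory exists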
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-420762
helpers_test.go:244: (dbg) docker inspect newest-cni-420762:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882",
	        "Created": "2025-12-17T20:01:35.486713573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 654236,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:02:16.487717516Z",
	            "FinishedAt": "2025-12-17T20:02:15.073093508Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/hostname",
	        "HostsPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/hosts",
	        "LogPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882-json.log",
	        "Name": "/newest-cni-420762",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-420762:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-420762",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882",
	                "LowerDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-420762",
	                "Source": "/var/lib/docker/volumes/newest-cni-420762/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-420762",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-420762",
	                "name.minikube.sigs.k8s.io": "newest-cni-420762",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ec9277c5a29195077d1885667b8da9a02c93c68737d286cd68b39606620e2984",
	            "SandboxKey": "/var/run/docker/netns/ec9277c5a291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-420762": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c599555d4217815d05b632e5621ed20805e2fb5e529f70229a8fb07f9886d72c",
	                    "EndpointID": "b5e97611e782f7026ead0b051aa30ebcf50b984d4d1038df4a213df990e38e01",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "02:07:5d:50:b8:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-420762",
	                        "f638a198c1fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762: exit status 2 (405.637618ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
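Note: in the docker inspect output above the node container itself looks healthy ("Status": "running", "Paused": false), which suggests the pause failure is confined to the workloads inside the node rather than the Docker container. If only those two fields are needed again, a template query keeps it short:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-420762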
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-420762 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-420762 logs -n 25: (1.170204239s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:00 UTC │ 17 Dec 25 20:01 UTC │
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                         │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759234 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p newest-cni-420762 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-420762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ stop    │ -p embed-certs-147021 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ image   │ newest-cni-420762 image list --format=json                                                                                                                                                                                                         │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ pause   │ -p newest-cni-420762 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:02:25
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
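The header above documents the klog-style format that every entry below follows: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg. A minimal Go sketch of splitting such a line into its fields; the regex and the sample entry are illustrative only and are not part of minikube:

	package main

	import (
		"fmt"
		"regexp"
	)

	// klogLine mirrors the documented header format:
	//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:\]]+):(\d+)\] (.*)$`)

	func main() {
		// Sample copied from the trace below; any entry in this log should match.
		sample := "I1217 20:02:25.111745  656592 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6], m[7])
		}
	}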
	I1217 20:02:25.111745  656592 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:25.112015  656592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:25.112026  656592 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:25.112031  656592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:25.112260  656592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:25.112764  656592 out.go:368] Setting JSON to false
	I1217 20:02:25.114229  656592 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6296,"bootTime":1765995449,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:02:25.114300  656592 start.go:143] virtualization: kvm guest
	I1217 20:02:25.116958  656592 out.go:179] * [kubernetes-upgrade-322567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:02:25.118368  656592 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:02:25.118387  656592 notify.go:221] Checking for updates...
	I1217 20:02:25.120938  656592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:02:25.123211  656592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:25.124640  656592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:02:25.126369  656592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:02:25.127858  656592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:02:25.129625  656592 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:25.130470  656592 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:02:25.183444  656592 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:02:25.183615  656592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:25.263267  656592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-17 20:02:25.25102943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:25.263595  656592 docker.go:319] overlay module found
	I1217 20:02:25.265394  656592 out.go:179] * Using the docker driver based on existing profile
	I1217 20:02:25.266659  656592 start.go:309] selected driver: docker
	I1217 20:02:25.266682  656592 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:25.266800  656592 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:02:25.267678  656592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:25.345311  656592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-17 20:02:25.332645151 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:25.345593  656592 cni.go:84] Creating CNI manager for ""
	I1217 20:02:25.345651  656592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:25.345679  656592 start.go:353] cluster config:
	{Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:25.347071  656592 out.go:179] * Starting "kubernetes-upgrade-322567" primary control-plane node in "kubernetes-upgrade-322567" cluster
	I1217 20:02:25.348175  656592 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:02:25.349380  656592 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:02:25.350410  656592 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:02:25.350448  656592 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 20:02:25.350458  656592 cache.go:65] Caching tarball of preloaded images
	I1217 20:02:25.350472  656592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:02:25.350572  656592 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:02:25.350586  656592 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:02:25.350686  656592 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/config.json ...
	I1217 20:02:25.378573  656592 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:02:25.378609  656592 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:02:25.378639  656592 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:02:25.378681  656592 start.go:360] acquireMachinesLock for kubernetes-upgrade-322567: {Name:mk564afb625ef099e3d779cfe3fa06e9fed195e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:02:25.378754  656592 start.go:364] duration metric: took 50.502µs to acquireMachinesLock for "kubernetes-upgrade-322567"
	I1217 20:02:25.378776  656592 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:02:25.378783  656592 fix.go:54] fixHost starting: 
	I1217 20:02:25.379132  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:25.403772  656592 fix.go:112] recreateIfNeeded on kubernetes-upgrade-322567: state=Running err=<nil>
	W1217 20:02:25.403799  656592 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 20:02:21.822105  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	W1217 20:02:23.828751  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	I1217 20:02:23.839611  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 20:02:23.839634  654009 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 20:02:23.839694  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:23.879811  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:23.880004  654009 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:23.880022  654009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:02:23.880259  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:23.887337  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:23.909952  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:23.977280  654009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:23.996392  654009 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:02:23.996648  654009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:02:24.012014  654009 api_server.go:72] duration metric: took 205.174874ms to wait for apiserver process to appear ...
	I1217 20:02:24.012046  654009 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:02:24.012124  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:24.018737  654009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:24.034420  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 20:02:24.034449  654009 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 20:02:24.048004  654009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:24.055283  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 20:02:24.055314  654009 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 20:02:24.075200  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 20:02:24.075229  654009 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 20:02:24.103355  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 20:02:24.103383  654009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 20:02:24.121143  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 20:02:24.121170  654009 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 20:02:24.140025  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 20:02:24.140053  654009 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 20:02:24.155820  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 20:02:24.155842  654009 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 20:02:24.173804  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 20:02:24.173832  654009 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 20:02:24.190646  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:24.190677  654009 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 20:02:24.211979  654009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:25.736027  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:02:25.736087  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:02:25.736105  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:25.748234  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1217 20:02:25.748275  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1217 20:02:26.013072  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:26.018374  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:26.018411  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:26.452828  654009 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.434052873s)
	I1217 20:02:26.452898  654009 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.404859516s)
	I1217 20:02:26.453047  654009 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.241028978s)
	I1217 20:02:26.455161  654009 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-420762 addons enable metrics-server
	
	I1217 20:02:26.468338  654009 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 20:02:26.469958  654009 addons.go:530] duration metric: took 2.663043743s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 20:02:26.513119  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:26.517769  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:26.517802  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:27.012177  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:27.017530  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 20:02:27.018608  654009 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 20:02:27.018656  654009 api_server.go:131] duration metric: took 3.006587162s to wait for apiserver health ...
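The exchanges above trace the apiserver readiness wait: anonymous GETs to /healthz return 403 until the bootstrap RBAC roles exist, then 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still failing, and finally 200 "ok". A minimal Go sketch of that retry pattern, assuming an illustrative URL, interval, and timeout and skipping certificate verification; this is not minikube's implementation:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls /healthz until it answers 200 or the deadline passes.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The server cert is signed by minikubeCA, which this sketch does not
			// load, so verification is skipped purely for illustration.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 and 500 both mean "not ready yet"; only 200 counts as healthy.
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitHealthz("https://192.168.103.2:8443/healthz", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}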
	I1217 20:02:27.018669  654009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:02:27.022470  654009 system_pods.go:59] 8 kube-system pods found
	I1217 20:02:27.022502  654009 system_pods.go:61] "coredns-7d764666f9-jsv2j" [262483f9-bcc1-4054-871a-16cfad4a4abd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:27.022530  654009 system_pods.go:61] "etcd-newest-cni-420762" [70516caa-a886-4a08-95db-bc22f8c6a7d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:02:27.022548  654009 system_pods.go:61] "kindnet-2f44p" [1888eaab-a42f-4c23-87e4-6c698a41af87] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 20:02:27.022556  654009 system_pods.go:61] "kube-apiserver-newest-cni-420762" [8fa67084-5bff-41b5-bdfa-65290314913d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:02:27.022564  654009 system_pods.go:61] "kube-controller-manager-newest-cni-420762" [732ac716-843a-468b-8ed7-4b94e35445d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:02:27.022569  654009 system_pods.go:61] "kube-proxy-qpt8z" [5bbdb455-62b1-48ac-a4d9-b930a3dc010f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 20:02:27.022574  654009 system_pods.go:61] "kube-scheduler-newest-cni-420762" [ae106497-db01-4129-ad94-7e637ad3278c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:02:27.022579  654009 system_pods.go:61] "storage-provisioner" [4d3bd70b-556b-4c14-a933-2636b424730f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:27.022587  654009 system_pods.go:74] duration metric: took 3.905193ms to wait for pod list to return data ...
	I1217 20:02:27.022598  654009 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:02:27.030693  654009 default_sa.go:45] found service account: "default"
	I1217 20:02:27.030723  654009 default_sa.go:55] duration metric: took 8.117112ms for default service account to be created ...
	I1217 20:02:27.030754  654009 kubeadm.go:587] duration metric: took 3.223906598s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 20:02:27.030776  654009 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:02:27.033915  654009 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:02:27.033943  654009 node_conditions.go:123] node cpu capacity is 8
	I1217 20:02:27.033959  654009 node_conditions.go:105] duration metric: took 3.177321ms to run NodePressure ...
	I1217 20:02:27.033973  654009 start.go:242] waiting for startup goroutines ...
	I1217 20:02:27.033983  654009 start.go:247] waiting for cluster config update ...
	I1217 20:02:27.034002  654009 start.go:256] writing updated cluster config ...
	I1217 20:02:27.034364  654009 ssh_runner.go:195] Run: rm -f paused
	I1217 20:02:27.087374  654009 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:02:27.093218  654009 out.go:179] * Done! kubectl is now configured to use "newest-cni-420762" cluster and "default" namespace by default
	I1217 20:02:25.405565  656592 out.go:252] * Updating the running docker "kubernetes-upgrade-322567" container ...
	I1217 20:02:25.405598  656592 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:25.405679  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:25.428542  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:25.428878  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:25.428899  656592 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:25.594013  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-322567
	
	I1217 20:02:25.594038  656592 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-322567"
	I1217 20:02:25.594118  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:25.622236  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:25.622582  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:25.622601  656592 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-322567 && echo "kubernetes-upgrade-322567" | sudo tee /etc/hostname
	I1217 20:02:25.815712  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-322567
	
	I1217 20:02:25.815825  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:25.849231  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:25.849797  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:25.849882  656592 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-322567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-322567/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-322567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:26.025893  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:26.025944  656592 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:26.025973  656592 ubuntu.go:190] setting up certificates
	I1217 20:02:26.025987  656592 provision.go:84] configureAuth start
	I1217 20:02:26.026053  656592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-322567
	I1217 20:02:26.052308  656592 provision.go:143] copyHostCerts
	I1217 20:02:26.052397  656592 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:26.052421  656592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:26.052512  656592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:26.052651  656592 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:26.052666  656592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:26.052709  656592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:26.052804  656592 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:26.052820  656592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:26.052859  656592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:26.052959  656592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-322567 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-322567 localhost minikube]
	I1217 20:02:26.240205  656592 provision.go:177] copyRemoteCerts
	I1217 20:02:26.240276  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:26.240321  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:26.265735  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:26.391513  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:02:26.413011  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:26.433987  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 20:02:26.454028  656592 provision.go:87] duration metric: took 428.028465ms to configureAuth
	I1217 20:02:26.454056  656592 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:26.454354  656592 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:26.454467  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:26.476177  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:26.476419  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:26.476441  656592 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:27.083678  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:27.083712  656592 machine.go:97] duration metric: took 1.678106915s to provisionDockerMachine
	I1217 20:02:27.083726  656592 start.go:293] postStartSetup for "kubernetes-upgrade-322567" (driver="docker")
	I1217 20:02:27.083756  656592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:27.083834  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:27.083882  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.105029  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.216380  656592 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:27.220903  656592 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:27.220933  656592 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:27.220945  656592 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:27.220991  656592 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:27.221070  656592 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:27.221220  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:27.230137  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:27.250752  656592 start.go:296] duration metric: took 167.002454ms for postStartSetup
	I1217 20:02:27.250840  656592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:27.250902  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.272239  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.385314  656592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:27.399439  656592 fix.go:56] duration metric: took 2.020645251s for fixHost
	I1217 20:02:27.399474  656592 start.go:83] releasing machines lock for "kubernetes-upgrade-322567", held for 2.020707503s
	I1217 20:02:27.399569  656592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-322567
	I1217 20:02:27.434025  656592 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:27.434111  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.434346  656592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:27.434448  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.461107  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.461505  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.643742  656592 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:27.651827  656592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:27.694427  656592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:27.699906  656592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:27.700006  656592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:27.708402  656592 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:02:27.708431  656592 start.go:496] detecting cgroup driver to use...
	I1217 20:02:27.708465  656592 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:27.708511  656592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:27.727444  656592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:27.742375  656592 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:27.742438  656592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:27.762488  656592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:27.779761  656592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:27.900288  656592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:28.014144  656592 docker.go:234] disabling docker service ...
	I1217 20:02:28.014233  656592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:28.033012  656592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:28.046924  656592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:28.174768  656592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:28.288548  656592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:02:28.301658  656592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:28.320927  656592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:28.320994  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.331991  656592 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:28.332070  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.343615  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.355210  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.366355  656592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:28.376724  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.386830  656592 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.395696  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
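Taken together, the sed commands above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the commands shown, with section placement assumed from CRI-O's documented layout, not a capture of the actual file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The steps that follow then enable IPv4 forwarding, reload systemd units, and restart crio so the new drop-in takes effect.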
	I1217 20:02:28.405059  656592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:28.413336  656592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:28.422321  656592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:28.535923  656592 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:02:28.724835  656592 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:28.724927  656592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:28.729280  656592 start.go:564] Will wait 60s for crictl version
	I1217 20:02:28.729342  656592 ssh_runner.go:195] Run: which crictl
	I1217 20:02:28.733169  656592 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:28.761146  656592 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:28.761245  656592 ssh_runner.go:195] Run: crio --version
	I1217 20:02:28.804868  656592 ssh_runner.go:195] Run: crio --version
	I1217 20:02:28.844797  656592 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:02:28.846267  656592 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-322567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:28.876854  656592 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:28.891514  656592 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:28.891669  656592 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:02:28.891731  656592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:28.937063  656592 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:28.937119  656592 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:28.937186  656592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:28.971786  656592 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:28.971815  656592 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:28.971825  656592 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 20:02:28.971977  656592 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-322567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:02:28.972133  656592 ssh_runner.go:195] Run: crio config
	I1217 20:02:29.036908  656592 cni.go:84] Creating CNI manager for ""
	I1217 20:02:29.036930  656592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:29.036957  656592 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:29.036980  656592 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-322567 NodeName:kubernetes-upgrade-322567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:29.037110  656592 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-322567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:29.037175  656592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:02:29.045827  656592 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:29.045893  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:29.054679  656592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1217 20:02:29.068635  656592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:02:29.084451  656592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1217 20:02:29.099946  656592 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:29.105193  656592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:29.229942  656592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:29.247062  656592 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567 for IP: 192.168.76.2
	I1217 20:02:29.247119  656592 certs.go:195] generating shared ca certs ...
	I1217 20:02:29.247142  656592 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:29.247326  656592 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:29.247395  656592 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:29.247409  656592 certs.go:257] generating profile certs ...
	I1217 20:02:29.247534  656592 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key
	I1217 20:02:29.247600  656592 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/apiserver.key.2db7b9a3
	I1217 20:02:29.247663  656592 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/proxy-client.key
	I1217 20:02:29.247822  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:29.247870  656592 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:29.247886  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:29.247928  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:29.247972  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:29.248009  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:29.248111  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:29.249030  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:29.270146  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:29.293014  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:29.317841  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:29.343243  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 20:02:29.365410  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:02:29.391284  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:29.416212  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:02:29.445476  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:29.473949  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:29.499911  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:29.526588  656592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:29.544300  656592 ssh_runner.go:195] Run: openssl version
	I1217 20:02:29.552943  656592 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.564309  656592 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:29.578058  656592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.584319  656592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.584484  656592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.637240  656592 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:29.647690  656592 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.659222  656592 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:29.670574  656592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.677193  656592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.677282  656592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.734672  656592 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:29.746328  656592 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.755118  656592 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:29.764857  656592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.770353  656592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.770424  656592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.818607  656592 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:29.830111  656592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:29.835851  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:02:29.892635  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:02:29.938671  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:02:29.989324  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:02:30.041445  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:02:30.094798  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 20:02:30.156935  656592 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:30.157145  656592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:30.157287  656592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:30.196383  656592 cri.go:89] found id: "f20c3d774978aefefe3b71c9a753a8f36e6243b0c49bedf24d9b2878427f3ab5"
	I1217 20:02:30.196407  656592 cri.go:89] found id: "e2edbb87a291f9ffb08849a94dcfe691f52b14672463bddab26e7d6fca4c27c6"
	I1217 20:02:30.196413  656592 cri.go:89] found id: "6323ff6c5ccaddede5a650ed70a0ba4a7eef98458545ea75acd16def3f4683bd"
	I1217 20:02:30.196418  656592 cri.go:89] found id: "bae069ab95bb62217366b05e584c29c1ca4d8e18f5df479813be18651e40f4aa"
	I1217 20:02:30.196423  656592 cri.go:89] found id: "9edc4c7edfbcc481ef9463b4a5f05184f49baa725a76decb81ed842f3504d1ec"
	I1217 20:02:30.196428  656592 cri.go:89] found id: ""
	I1217 20:02:30.196468  656592 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:02:30.213728  656592 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:30Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:30.213906  656592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:30.227364  656592 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:02:30.227388  656592 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:02:30.227468  656592 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:02:30.238470  656592 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:02:30.239930  656592 kubeconfig.go:125] found "kubernetes-upgrade-322567" server: "https://192.168.76.2:8443"
	I1217 20:02:30.242278  656592 kapi.go:59] client config for kubernetes-upgrade-322567: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:02:30.242822  656592 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:02:30.242850  656592 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:02:30.242858  656592 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:02:30.242863  656592 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:02:30.242868  656592 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:02:30.243362  656592 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:02:30.254889  656592 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 20:02:30.254930  656592 kubeadm.go:602] duration metric: took 27.534412ms to restartPrimaryControlPlane
	I1217 20:02:30.254942  656592 kubeadm.go:403] duration metric: took 98.022018ms to StartCluster
	I1217 20:02:30.254962  656592 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:30.255042  656592 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:30.257476  656592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:30.257814  656592 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:02:30.258006  656592 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:02:30.258142  656592 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-322567"
	I1217 20:02:30.258158  656592 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-322567"
	W1217 20:02:30.258167  656592 addons.go:248] addon storage-provisioner should already be in state true
	I1217 20:02:30.258205  656592 host.go:66] Checking if "kubernetes-upgrade-322567" exists ...
	I1217 20:02:30.258254  656592 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:30.258343  656592 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-322567"
	I1217 20:02:30.258365  656592 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-322567"
	I1217 20:02:30.258675  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:30.258686  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:30.259440  656592 out.go:179] * Verifying Kubernetes components...
	
	
	==> CRI-O <==
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.270441957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.276037666Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ef5815f-6dc1-4737-89e0-4f7565794f6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.284791593Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.285806681Z" level=info msg="Ran pod sandbox 3216dc3e6a2ffab19a4485b7e3451b4bb803337898b2e601336467a4d64bac11 with infra container: kube-system/kube-proxy-qpt8z/POD" id=1ef5815f-6dc1-4737-89e0-4f7565794f6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.287309709Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=46350fcc-2be5-44a8-97d1-bca85c5a9c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.288529193Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=7bbbf57b-58c3-457f-a3fd-d4c07624f24a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.289479555Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=f41d10db-4423-48b2-9e9f-3e040f2d8286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.291054129Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.292619878Z" level=info msg="Ran pod sandbox 11f5fce47c19ca6451a94942c23de557969d2e81b6ce0a22a1481aa8a8fd7907 with infra container: kube-system/kindnet-2f44p/POD" id=46350fcc-2be5-44a8-97d1-bca85c5a9c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.292898307Z" level=info msg="Creating container: kube-system/kube-proxy-qpt8z/kube-proxy" id=2bff2fc8-7eb1-4a19-89d3-1dfdc852b4d2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.293030427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.297385067Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=3d4d8d3d-7cce-4c67-a93d-721cdd43046b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.308765603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.309016389Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=e92227e7-4236-40af-9442-4400a1a3c750 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.309734647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.311423792Z" level=info msg="Creating container: kube-system/kindnet-2f44p/kindnet-cni" id=cee7d116-70c7-469f-aca0-2f96d72ea6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.311557412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.317070017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.317927263Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.351057249Z" level=info msg="Created container ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd: kube-system/kube-proxy-qpt8z/kube-proxy" id=2bff2fc8-7eb1-4a19-89d3-1dfdc852b4d2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.352633779Z" level=info msg="Starting container: ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd" id=d1263143-a930-41f4-acae-6176ae03d055 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.353839326Z" level=info msg="Created container 20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33: kube-system/kindnet-2f44p/kindnet-cni" id=cee7d116-70c7-469f-aca0-2f96d72ea6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.354582458Z" level=info msg="Starting container: 20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33" id=e841afc1-e026-477b-b6cf-f3d5b3586f2e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.356335688Z" level=info msg="Started container" PID=1056 containerID=ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd description=kube-system/kube-proxy-qpt8z/kube-proxy id=d1263143-a930-41f4-acae-6176ae03d055 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3216dc3e6a2ffab19a4485b7e3451b4bb803337898b2e601336467a4d64bac11
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.356466453Z" level=info msg="Started container" PID=1059 containerID=20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33 description=kube-system/kindnet-2f44p/kindnet-cni id=e841afc1-e026-477b-b6cf-f3d5b3586f2e name=/runtime.v1.RuntimeService/StartContainer sandboxID=11f5fce47c19ca6451a94942c23de557969d2e81b6ce0a22a1481aa8a8fd7907
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20324d8b31869       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   4 seconds ago       Running             kindnet-cni               1                   11f5fce47c19c       kindnet-2f44p                               kube-system
	ff0ed23e8222f       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   4 seconds ago       Running             kube-proxy                1                   3216dc3e6a2ff       kube-proxy-qpt8z                            kube-system
	7d90e89ed2e6c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   7 seconds ago       Running             etcd                      1                   4d291830c53f0       etcd-newest-cni-420762                      kube-system
	8544c715ea46d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   7 seconds ago       Running             kube-controller-manager   1                   324146884dcfd       kube-controller-manager-newest-cni-420762   kube-system
	64b8df55df523       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   7 seconds ago       Running             kube-apiserver            1                   3eff27b6e4242       kube-apiserver-newest-cni-420762            kube-system
	b7259506a4e5b       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   7 seconds ago       Running             kube-scheduler            1                   a8b6d7b767cd1       kube-scheduler-newest-cni-420762            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-420762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-420762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=newest-cni-420762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-420762
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:02:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-420762
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                9a0da974-6b92-462d-a556-ee8264e627f2
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-420762                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-2f44p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      38s
	  kube-system                 kube-apiserver-newest-cni-420762             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-420762    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-qpt8z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-newest-cni-420762             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  39s   node-controller  Node newest-cni-420762 event: Registered Node newest-cni-420762 in Controller
	  Normal  RegisteredNode  3s    node-controller  Node newest-cni-420762 event: Registered Node newest-cni-420762 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [7d90e89ed2e6c5e28181da0ddfeb35b77f0b1a43e095576732addaa43e6437ba] <==
	{"level":"info","ts":"2025-12-17T20:02:23.582629Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T20:02:23.583456Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T20:02:23.582656Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T20:02:23.582733Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T20:02:23.582987Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-17T20:02:23.583715Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T20:02:23.583905Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T20:02:24.570189Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T20:02:24.570248Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T20:02:24.570330Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T20:02:24.570344Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:02:24.570364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.571293Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.571337Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:02:24.571363Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.571371Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.572796Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-420762 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T20:02:24.572826Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:02:24.572850Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:02:24.572967Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T20:02:24.573036Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T20:02:24.574239Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:02:24.574395Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:02:24.579113Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-17T20:02:24.579110Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:02:31 up  1:45,  0 user,  load average: 5.57, 3.77, 2.57
	Linux newest-cni-420762 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33] <==
	I1217 20:02:26.515834       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:26.608373       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 20:02:26.608536       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:26.608567       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:26.608604       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:26.809955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:26.810008       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:26.810022       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:26.810189       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:27.209120       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:27.210049       1 metrics.go:72] Registering metrics
	I1217 20:02:27.210219       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [64b8df55df5230a0b1d5727316ee323fddc47f3997c667cf27faf9dbec35288f] <==
	I1217 20:02:25.810066       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:02:25.810103       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:02:25.810337       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:02:25.810340       1 aggregator.go:187] initial CRD sync complete...
	I1217 20:02:25.810355       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:02:25.810365       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:02:25.810371       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:02:25.812904       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:25.812985       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:02:25.824662       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1217 20:02:25.829207       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:02:25.831024       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:02:25.863816       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:02:25.886033       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:02:26.163263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:26.183321       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:02:26.220296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:02:26.242804       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:02:26.251167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:02:26.352374       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.242.224"}
	I1217 20:02:26.366436       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.23.40"}
	I1217 20:02:26.713939       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 20:02:29.361172       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:29.511761       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:02:29.562138       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8544c715ea46d06c40c805d6d1253f17f885eca03855c5b880ed720d0fff20f4] <==
	I1217 20:02:28.974453       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.973525       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.974442       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.975413       1 range_allocator.go:177] "Sending events to api server"
	I1217 20:02:28.975472       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 20:02:28.975488       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:02:28.975495       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.976543       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.973452       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.975223       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 20:02:28.976906       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.976984       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977025       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977154       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977250       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977289       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.978175       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.979023       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.979221       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.979224       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.011009       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.074443       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.075502       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.075519       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 20:02:29.075545       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd] <==
	I1217 20:02:26.401619       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:26.460346       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:02:26.561137       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:26.561176       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 20:02:26.561279       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:26.580463       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:26.580528       1 server_linux.go:136] "Using iptables Proxier"
	I1217 20:02:26.585966       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:26.586364       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 20:02:26.586388       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:26.587846       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:26.587857       1 config.go:200] "Starting service config controller"
	I1217 20:02:26.587874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:26.587880       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:26.587972       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:26.587980       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:26.588036       1 config.go:309] "Starting node config controller"
	I1217 20:02:26.588043       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:26.588049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:26.688116       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:02:26.688116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:26.688188       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b7259506a4e5b6bee4d005c6c0116262f2d16fb84d5378bc6f468fae3b7d2570] <==
	I1217 20:02:23.940157       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:02:25.735738       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:02:25.735796       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:02:25.735811       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:02:25.735822       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:02:25.795377       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 20:02:25.795442       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:25.800569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:25.800619       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:02:25.800701       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:02:25.800753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:02:25.900833       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.850292     679 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.850325     679 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.853587     679 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.891526     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-420762\" already exists" pod="kube-system/etcd-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.891564     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.899024     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-420762\" already exists" pod="kube-system/kube-apiserver-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.899066     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.908220     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-420762\" already exists" pod="kube-system/kube-controller-manager-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.908435     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.918702     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-420762\" already exists" pod="kube-system/kube-scheduler-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.955213     679 apiserver.go:52] "Watching apiserver"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.962711     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-420762" containerName="kube-controller-manager"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: E1217 20:02:26.007936     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-420762" containerName="kube-scheduler"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: E1217 20:02:26.008323     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-420762" containerName="kube-apiserver"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: E1217 20:02:26.008399     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-420762" containerName="etcd"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.060327     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.159974     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-xtables-lock\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160032     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-lib-modules\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160101     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-cni-cfg\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160126     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-xtables-lock\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160140     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-lib-modules\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:02:27 newest-cni-420762 kubelet[679]: E1217 20:02:27.014315     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-420762" containerName="kube-scheduler"
	Dec 17 20:02:28 newest-cni-420762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:02:28 newest-cni-420762 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:02:28 newest-cni-420762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-420762 -n newest-cni-420762
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-420762 -n newest-cni-420762: exit status 2 (371.069497ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-420762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677: exit status 1 (78.647413ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jsv2j" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ztgjb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-d4677" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-420762
helpers_test.go:244: (dbg) docker inspect newest-cni-420762:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882",
	        "Created": "2025-12-17T20:01:35.486713573Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 654236,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:02:16.487717516Z",
	            "FinishedAt": "2025-12-17T20:02:15.073093508Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/hostname",
	        "HostsPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/hosts",
	        "LogPath": "/var/lib/docker/containers/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882/f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882-json.log",
	        "Name": "/newest-cni-420762",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-420762:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-420762",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f638a198c1fac512e27e9dc5b5e8951d602e997655ed0515839658576a7bc882",
	                "LowerDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1752dfd752ba541c00ea437bb3a181f09772c91428c90506c33b812d67f94809/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-420762",
	                "Source": "/var/lib/docker/volumes/newest-cni-420762/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-420762",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-420762",
	                "name.minikube.sigs.k8s.io": "newest-cni-420762",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ec9277c5a29195077d1885667b8da9a02c93c68737d286cd68b39606620e2984",
	            "SandboxKey": "/var/run/docker/netns/ec9277c5a291",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-420762": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c599555d4217815d05b632e5621ed20805e2fb5e529f70229a8fb07f9886d72c",
	                    "EndpointID": "b5e97611e782f7026ead0b051aa30ebcf50b984d4d1038df4a213df990e38e01",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "02:07:5d:50:b8:69",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-420762",
	                        "f638a198c1fa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762: exit status 2 (340.09816ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-420762 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-420762 logs -n 25: (1.155214821s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ no-preload-832842 image list --format=json                                                                                                                                                                                                         │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p no-preload-832842 --alsologtostderr -v=1                                                                                                                                                                                                        │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ image   │ old-k8s-version-894575 image list --format=json                                                                                                                                                                                                    │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ pause   │ -p old-k8s-version-894575 --alsologtostderr -v=1                                                                                                                                                                                                   │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p no-preload-832842                                                                                                                                                                                                                               │ no-preload-832842            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759234 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p newest-cni-420762 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-420762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ stop    │ -p embed-certs-147021 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ image   │ newest-cni-420762 image list --format=json                                                                                                                                                                                                         │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ pause   │ -p newest-cni-420762 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-322567                                                                                                                                                                                                                       │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:02:25
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:02:25.111745  656592 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:25.112015  656592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:25.112026  656592 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:25.112031  656592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:25.112260  656592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:25.112764  656592 out.go:368] Setting JSON to false
	I1217 20:02:25.114229  656592 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6296,"bootTime":1765995449,"procs":364,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:02:25.114300  656592 start.go:143] virtualization: kvm guest
	I1217 20:02:25.116958  656592 out.go:179] * [kubernetes-upgrade-322567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:02:25.118368  656592 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:02:25.118387  656592 notify.go:221] Checking for updates...
	I1217 20:02:25.120938  656592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:02:25.123211  656592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:25.124640  656592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:02:25.126369  656592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:02:25.127858  656592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:02:25.129625  656592 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:25.130470  656592 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:02:25.183444  656592 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:02:25.183615  656592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:25.263267  656592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-17 20:02:25.25102943 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:25.263595  656592 docker.go:319] overlay module found
	I1217 20:02:25.265394  656592 out.go:179] * Using the docker driver based on existing profile
	I1217 20:02:25.266659  656592 start.go:309] selected driver: docker
	I1217 20:02:25.266682  656592 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APISer
verHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cust
omQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:25.266800  656592 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:02:25.267678  656592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:25.345311  656592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:86 SystemTime:2025-12-17 20:02:25.332645151 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:25.345593  656592 cni.go:84] Creating CNI manager for ""
	I1217 20:02:25.345651  656592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:25.345679  656592 start.go:353] cluster config:
	{Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:c
luster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSoc
k: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:25.347071  656592 out.go:179] * Starting "kubernetes-upgrade-322567" primary control-plane node in "kubernetes-upgrade-322567" cluster
	I1217 20:02:25.348175  656592 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:02:25.349380  656592 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:02:25.350410  656592 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:02:25.350448  656592 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
	I1217 20:02:25.350458  656592 cache.go:65] Caching tarball of preloaded images
	I1217 20:02:25.350472  656592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:02:25.350572  656592 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:02:25.350586  656592 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on crio
	I1217 20:02:25.350686  656592 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/config.json ...
	I1217 20:02:25.378573  656592 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:02:25.378609  656592 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:02:25.378639  656592 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:02:25.378681  656592 start.go:360] acquireMachinesLock for kubernetes-upgrade-322567: {Name:mk564afb625ef099e3d779cfe3fa06e9fed195e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:02:25.378754  656592 start.go:364] duration metric: took 50.502µs to acquireMachinesLock for "kubernetes-upgrade-322567"
	I1217 20:02:25.378776  656592 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:02:25.378783  656592 fix.go:54] fixHost starting: 
	I1217 20:02:25.379132  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:25.403772  656592 fix.go:112] recreateIfNeeded on kubernetes-upgrade-322567: state=Running err=<nil>
	W1217 20:02:25.403799  656592 fix.go:138] unexpected machine state, will restart: <nil>
	W1217 20:02:21.822105  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	W1217 20:02:23.828751  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	I1217 20:02:23.839611  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 20:02:23.839634  654009 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 20:02:23.839694  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:23.879811  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:23.880004  654009 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:23.880022  654009 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:02:23.880259  654009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-420762
	I1217 20:02:23.887337  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:23.909952  654009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/newest-cni-420762/id_rsa Username:docker}
	I1217 20:02:23.977280  654009 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:23.996392  654009 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:02:23.996648  654009 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:02:24.012014  654009 api_server.go:72] duration metric: took 205.174874ms to wait for apiserver process to appear ...
	I1217 20:02:24.012046  654009 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:02:24.012124  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:24.018737  654009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:24.034420  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 20:02:24.034449  654009 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 20:02:24.048004  654009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:24.055283  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 20:02:24.055314  654009 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 20:02:24.075200  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 20:02:24.075229  654009 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 20:02:24.103355  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 20:02:24.103383  654009 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 20:02:24.121143  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 20:02:24.121170  654009 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 20:02:24.140025  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 20:02:24.140053  654009 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 20:02:24.155820  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 20:02:24.155842  654009 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 20:02:24.173804  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 20:02:24.173832  654009 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 20:02:24.190646  654009 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:24.190677  654009 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 20:02:24.211979  654009 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:25.736027  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1217 20:02:25.736087  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1217 20:02:25.736105  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:25.748234  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1217 20:02:25.748275  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1217 20:02:26.013072  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:26.018374  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:26.018411  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:26.452828  654009 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.434052873s)
	I1217 20:02:26.452898  654009 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.404859516s)
	I1217 20:02:26.453047  654009 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.241028978s)
	I1217 20:02:26.455161  654009 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-420762 addons enable metrics-server
	
	I1217 20:02:26.468338  654009 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 20:02:26.469958  654009 addons.go:530] duration metric: took 2.663043743s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 20:02:26.513119  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:26.517769  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:26.517802  654009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:27.012177  654009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:02:27.017530  654009 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 20:02:27.018608  654009 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 20:02:27.018656  654009 api_server.go:131] duration metric: took 3.006587162s to wait for apiserver health ...
	I1217 20:02:27.018669  654009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:02:27.022470  654009 system_pods.go:59] 8 kube-system pods found
	I1217 20:02:27.022502  654009 system_pods.go:61] "coredns-7d764666f9-jsv2j" [262483f9-bcc1-4054-871a-16cfad4a4abd] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:27.022530  654009 system_pods.go:61] "etcd-newest-cni-420762" [70516caa-a886-4a08-95db-bc22f8c6a7d3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:02:27.022548  654009 system_pods.go:61] "kindnet-2f44p" [1888eaab-a42f-4c23-87e4-6c698a41af87] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 20:02:27.022556  654009 system_pods.go:61] "kube-apiserver-newest-cni-420762" [8fa67084-5bff-41b5-bdfa-65290314913d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:02:27.022564  654009 system_pods.go:61] "kube-controller-manager-newest-cni-420762" [732ac716-843a-468b-8ed7-4b94e35445d0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:02:27.022569  654009 system_pods.go:61] "kube-proxy-qpt8z" [5bbdb455-62b1-48ac-a4d9-b930a3dc010f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 20:02:27.022574  654009 system_pods.go:61] "kube-scheduler-newest-cni-420762" [ae106497-db01-4129-ad94-7e637ad3278c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:02:27.022579  654009 system_pods.go:61] "storage-provisioner" [4d3bd70b-556b-4c14-a933-2636b424730f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:27.022587  654009 system_pods.go:74] duration metric: took 3.905193ms to wait for pod list to return data ...
	I1217 20:02:27.022598  654009 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:02:27.030693  654009 default_sa.go:45] found service account: "default"
	I1217 20:02:27.030723  654009 default_sa.go:55] duration metric: took 8.117112ms for default service account to be created ...
	I1217 20:02:27.030754  654009 kubeadm.go:587] duration metric: took 3.223906598s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 20:02:27.030776  654009 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:02:27.033915  654009 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:02:27.033943  654009 node_conditions.go:123] node cpu capacity is 8
	I1217 20:02:27.033959  654009 node_conditions.go:105] duration metric: took 3.177321ms to run NodePressure ...
	I1217 20:02:27.033973  654009 start.go:242] waiting for startup goroutines ...
	I1217 20:02:27.033983  654009 start.go:247] waiting for cluster config update ...
	I1217 20:02:27.034002  654009 start.go:256] writing updated cluster config ...
	I1217 20:02:27.034364  654009 ssh_runner.go:195] Run: rm -f paused
	I1217 20:02:27.087374  654009 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:02:27.093218  654009 out.go:179] * Done! kubectl is now configured to use "newest-cni-420762" cluster and "default" namespace by default
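
The 500 responses recorded above come from the apiserver's verbose /healthz endpoint: each [+]/[-] row is an individual check, and the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks report "failed: reason withheld" until bootstrapping finishes, after which the endpoint returns a plain 200 "ok" (as at 20:02:27.017530). As a rough sketch of the kind of poll the api_server.go wait loop performs: the URL, the 5-second timeout, the 500 ms retry interval, and the InsecureSkipVerify transport below are illustrative assumptions, not minikube's actual client setup (which uses the cluster CA from the kubeconfig).

// healthz_poll.go - illustrative only: poll an apiserver /healthz endpoint until it
// returns 200, similar to the api_server.go wait loop shown in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// ?verbose asks the apiserver to list each individual [+]/[-] check.
	url := "https://192.168.103.2:8443/healthz?verbose"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only; minikube verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return // apiserver reports healthy
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}
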
	I1217 20:02:25.405565  656592 out.go:252] * Updating the running docker "kubernetes-upgrade-322567" container ...
	I1217 20:02:25.405598  656592 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:25.405679  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:25.428542  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:25.428878  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:25.428899  656592 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:25.594013  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-322567
	
	I1217 20:02:25.594038  656592 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-322567"
	I1217 20:02:25.594118  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:25.622236  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:25.622582  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:25.622601  656592 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-322567 && echo "kubernetes-upgrade-322567" | sudo tee /etc/hostname
	I1217 20:02:25.815712  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-322567
	
	I1217 20:02:25.815825  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:25.849231  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:25.849797  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:25.849882  656592 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-322567' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-322567/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-322567' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:26.025893  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:26.025944  656592 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:26.025973  656592 ubuntu.go:190] setting up certificates
	I1217 20:02:26.025987  656592 provision.go:84] configureAuth start
	I1217 20:02:26.026053  656592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-322567
	I1217 20:02:26.052308  656592 provision.go:143] copyHostCerts
	I1217 20:02:26.052397  656592 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:26.052421  656592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:26.052512  656592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:26.052651  656592 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:26.052666  656592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:26.052709  656592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:26.052804  656592 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:26.052820  656592 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:26.052859  656592 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:26.052959  656592 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-322567 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-322567 localhost minikube]
	I1217 20:02:26.240205  656592 provision.go:177] copyRemoteCerts
	I1217 20:02:26.240276  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:26.240321  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:26.265735  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:26.391513  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:02:26.413011  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:26.433987  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 20:02:26.454028  656592 provision.go:87] duration metric: took 428.028465ms to configureAuth
	I1217 20:02:26.454056  656592 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:26.454354  656592 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:26.454467  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:26.476177  656592 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:26.476419  656592 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33408 <nil> <nil>}
	I1217 20:02:26.476441  656592 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:27.083678  656592 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:27.083712  656592 machine.go:97] duration metric: took 1.678106915s to provisionDockerMachine
	I1217 20:02:27.083726  656592 start.go:293] postStartSetup for "kubernetes-upgrade-322567" (driver="docker")
	I1217 20:02:27.083756  656592 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:27.083834  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:27.083882  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.105029  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.216380  656592 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:27.220903  656592 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:27.220933  656592 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:27.220945  656592 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:27.220991  656592 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:27.221070  656592 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:27.221220  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:27.230137  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:27.250752  656592 start.go:296] duration metric: took 167.002454ms for postStartSetup
	I1217 20:02:27.250840  656592 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:27.250902  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.272239  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.385314  656592 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:27.399439  656592 fix.go:56] duration metric: took 2.020645251s for fixHost
	I1217 20:02:27.399474  656592 start.go:83] releasing machines lock for "kubernetes-upgrade-322567", held for 2.020707503s
	I1217 20:02:27.399569  656592 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-322567
	I1217 20:02:27.434025  656592 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:27.434111  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.434346  656592 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:27.434448  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:27.461107  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.461505  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:27.643742  656592 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:27.651827  656592 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:27.694427  656592 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:27.699906  656592 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:27.700006  656592 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:27.708402  656592 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:02:27.708431  656592 start.go:496] detecting cgroup driver to use...
	I1217 20:02:27.708465  656592 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:27.708511  656592 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:27.727444  656592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:27.742375  656592 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:27.742438  656592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:27.762488  656592 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:27.779761  656592 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:27.900288  656592 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:28.014144  656592 docker.go:234] disabling docker service ...
	I1217 20:02:28.014233  656592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:28.033012  656592 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:28.046924  656592 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:28.174768  656592 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:28.288548  656592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:02:28.301658  656592 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:28.320927  656592 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:28.320994  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.331991  656592 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:28.332070  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.343615  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.355210  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.366355  656592 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:28.376724  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.386830  656592 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.395696  656592 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:28.405059  656592 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:28.413336  656592 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:28.422321  656592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:28.535923  656592 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:02:28.724835  656592 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:28.724927  656592 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:28.729280  656592 start.go:564] Will wait 60s for crictl version
	I1217 20:02:28.729342  656592 ssh_runner.go:195] Run: which crictl
	I1217 20:02:28.733169  656592 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:28.761146  656592 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:28.761245  656592 ssh_runner.go:195] Run: crio --version
	I1217 20:02:28.804868  656592 ssh_runner.go:195] Run: crio --version
	I1217 20:02:28.844797  656592 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on CRI-O 1.34.3 ...
	I1217 20:02:28.846267  656592 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-322567 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:28.876854  656592 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:28.891514  656592 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServ
erName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePat
h: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:28.891669  656592 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
	I1217 20:02:28.891731  656592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:28.937063  656592 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:28.937119  656592 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:28.937186  656592 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:28.971786  656592 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:28.971815  656592 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:28.971825  656592 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-rc.1 crio true true} ...
	I1217 20:02:28.971977  656592 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-322567 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:02:28.972133  656592 ssh_runner.go:195] Run: crio config
	I1217 20:02:29.036908  656592 cni.go:84] Creating CNI manager for ""
	I1217 20:02:29.036930  656592 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:29.036957  656592 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:29.036980  656592 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-322567 NodeName:kubernetes-upgrade-322567 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.cr
t StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:29.037110  656592 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-322567"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:29.037175  656592 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1217 20:02:29.045827  656592 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:29.045893  656592 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:29.054679  656592 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I1217 20:02:29.068635  656592 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1217 20:02:29.084451  656592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2226 bytes)
	I1217 20:02:29.099946  656592 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:29.105193  656592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:29.229942  656592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:29.247062  656592 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567 for IP: 192.168.76.2
	I1217 20:02:29.247119  656592 certs.go:195] generating shared ca certs ...
	I1217 20:02:29.247142  656592 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:29.247326  656592 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:29.247395  656592 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:29.247409  656592 certs.go:257] generating profile certs ...
	I1217 20:02:29.247534  656592 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key
	I1217 20:02:29.247600  656592 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/apiserver.key.2db7b9a3
	I1217 20:02:29.247663  656592 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/proxy-client.key
	I1217 20:02:29.247822  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:29.247870  656592 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:29.247886  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:29.247928  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:29.247972  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:29.248009  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:29.248111  656592 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:29.249030  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:29.270146  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:29.293014  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:29.317841  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:29.343243  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 20:02:29.365410  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:02:29.391284  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:29.416212  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:02:29.445476  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:29.473949  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:29.499911  656592 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:29.526588  656592 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:29.544300  656592 ssh_runner.go:195] Run: openssl version
	I1217 20:02:29.552943  656592 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.564309  656592 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:29.578058  656592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.584319  656592 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.584484  656592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:29.637240  656592 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:29.647690  656592 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.659222  656592 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:29.670574  656592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.677193  656592 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.677282  656592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:29.734672  656592 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:29.746328  656592 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.755118  656592 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:29.764857  656592 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.770353  656592 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.770424  656592 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:29.818607  656592 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:29.830111  656592 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:29.835851  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:02:29.892635  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:02:29.938671  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:02:29.989324  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:02:30.041445  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:02:30.094798  656592 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
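
The six `openssl x509 -noout -checkend 86400` runs above confirm that none of the existing control-plane certificates expire within the next 24 hours before the upgrade reuses them. A minimal Go equivalent of that check is sketched below; it assumes it runs where the certificate file is readable (i.e. inside the node), and the error handling is deliberately bare.

// checkend.go - illustrative sketch of what `openssl x509 -noout -checkend 86400`
// verifies: that a PEM certificate is still valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same path the log checks; readable only inside the minikube node.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Println("read:", err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse:", err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; it would need regenerating")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
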
	I1217 20:02:30.156935  656592 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-322567 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:kubernetes-upgrade-322567 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:30.157145  656592 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:30.157287  656592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:30.196383  656592 cri.go:89] found id: "f20c3d774978aefefe3b71c9a753a8f36e6243b0c49bedf24d9b2878427f3ab5"
	I1217 20:02:30.196407  656592 cri.go:89] found id: "e2edbb87a291f9ffb08849a94dcfe691f52b14672463bddab26e7d6fca4c27c6"
	I1217 20:02:30.196413  656592 cri.go:89] found id: "6323ff6c5ccaddede5a650ed70a0ba4a7eef98458545ea75acd16def3f4683bd"
	I1217 20:02:30.196418  656592 cri.go:89] found id: "bae069ab95bb62217366b05e584c29c1ca4d8e18f5df479813be18651e40f4aa"
	I1217 20:02:30.196423  656592 cri.go:89] found id: "9edc4c7edfbcc481ef9463b4a5f05184f49baa725a76decb81ed842f3504d1ec"
	I1217 20:02:30.196428  656592 cri.go:89] found id: ""
	I1217 20:02:30.196468  656592 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:02:30.213728  656592 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:30Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:30.213906  656592 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:30.227364  656592 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:02:30.227388  656592 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:02:30.227468  656592 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:02:30.238470  656592 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:02:30.239930  656592 kubeconfig.go:125] found "kubernetes-upgrade-322567" server: "https://192.168.76.2:8443"
	I1217 20:02:30.242278  656592 kapi.go:59] client config for kubernetes-upgrade-322567: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:02:30.242822  656592 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 20:02:30.242850  656592 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 20:02:30.242858  656592 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 20:02:30.242863  656592 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 20:02:30.242868  656592 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 20:02:30.243362  656592 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:02:30.254889  656592 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1217 20:02:30.254930  656592 kubeadm.go:602] duration metric: took 27.534412ms to restartPrimaryControlPlane
	I1217 20:02:30.254942  656592 kubeadm.go:403] duration metric: took 98.022018ms to StartCluster
	I1217 20:02:30.254962  656592 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:30.255042  656592 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:30.257476  656592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:30.257814  656592 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:02:30.258006  656592 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:02:30.258142  656592 addons.go:70] Setting storage-provisioner=true in profile "kubernetes-upgrade-322567"
	I1217 20:02:30.258158  656592 addons.go:239] Setting addon storage-provisioner=true in "kubernetes-upgrade-322567"
	W1217 20:02:30.258167  656592 addons.go:248] addon storage-provisioner should already be in state true
	I1217 20:02:30.258205  656592 host.go:66] Checking if "kubernetes-upgrade-322567" exists ...
	I1217 20:02:30.258254  656592 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 20:02:30.258343  656592 addons.go:70] Setting default-storageclass=true in profile "kubernetes-upgrade-322567"
	I1217 20:02:30.258365  656592 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-322567"
	I1217 20:02:30.258675  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:30.258686  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:30.259440  656592 out.go:179] * Verifying Kubernetes components...
	I1217 20:02:30.260783  656592 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:30.287801  656592 kapi.go:59] client config for kubernetes-upgrade-322567: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.crt", KeyFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key", CAFile:"/home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(ni
l), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2817500), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 20:02:30.288175  656592 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1217 20:02:25.834041  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	W1217 20:02:28.321180  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	W1217 20:02:30.330920  649079 pod_ready.go:104] pod "coredns-66bc5c9577-lv4jd" is not "Ready", error: <nil>
	I1217 20:02:30.288243  656592 addons.go:239] Setting addon default-storageclass=true in "kubernetes-upgrade-322567"
	W1217 20:02:30.288261  656592 addons.go:248] addon default-storageclass should already be in state true
	I1217 20:02:30.288293  656592 host.go:66] Checking if "kubernetes-upgrade-322567" exists ...
	I1217 20:02:30.288775  656592 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-322567 --format={{.State.Status}}
	I1217 20:02:30.289451  656592 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:30.289470  656592 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:02:30.289522  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:30.326187  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:30.329084  656592 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:30.329113  656592 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:02:30.329179  656592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-322567
	I1217 20:02:30.361878  656592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33408 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kubernetes-upgrade-322567/id_rsa Username:docker}
	I1217 20:02:30.433343  656592 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:30.450871  656592 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:02:30.450970  656592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:02:30.457711  656592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:30.463967  656592 api_server.go:72] duration metric: took 206.110154ms to wait for apiserver process to appear ...
	I1217 20:02:30.464003  656592 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:02:30.464028  656592 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:02:30.470900  656592 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 20:02:30.478311  656592 api_server.go:141] control plane version: v1.35.0-rc.1
	I1217 20:02:30.478348  656592 api_server.go:131] duration metric: took 14.335868ms to wait for apiserver health ...
	I1217 20:02:30.478361  656592 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:02:30.484153  656592 system_pods.go:59] 9 kube-system pods found
	I1217 20:02:30.484201  656592 system_pods.go:61] "coredns-7d764666f9-r8tb4" [2ce44657-e5ab-4272-b0cd-331f350c5dd3] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:30.484213  656592 system_pods.go:61] "coredns-7d764666f9-wbrcd" [6fb2d1e7-f2ce-4a03-8c43-ed016643a8f7] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:30.484224  656592 system_pods.go:61] "etcd-kubernetes-upgrade-322567" [d9600ace-85a3-42a4-b650-47656522a97e] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:02:30.484234  656592 system_pods.go:61] "kindnet-5chmq" [084575bd-7c45-4c8f-a275-56fb8077789d] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1217 20:02:30.484247  656592 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-322567" [c3f37449-370f-4373-8bdb-6ae45c496fda] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:02:30.484260  656592 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-322567" [0cfe5989-c15d-46cf-99a3-427a6506099e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:02:30.484266  656592 system_pods.go:61] "kube-proxy-pzwj5" [63d3a1e7-809d-4545-a94c-5484fcd67e42] Running
	I1217 20:02:30.484272  656592 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-322567" [6d4e29cb-87f8-4713-8390-90c3d4b29d1b] Running
	I1217 20:02:30.484278  656592 system_pods.go:61] "storage-provisioner" [b37fc2ef-4f1e-4d16-b149-db4b33e5ce8e] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint(s). no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1217 20:02:30.484288  656592 system_pods.go:74] duration metric: took 5.919561ms to wait for pod list to return data ...
	I1217 20:02:30.484303  656592 kubeadm.go:587] duration metric: took 226.453008ms to wait for: map[apiserver:true system_pods:true]
	I1217 20:02:30.484321  656592 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:02:30.486284  656592 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:30.488050  656592 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:02:30.488104  656592 node_conditions.go:123] node cpu capacity is 8
	I1217 20:02:30.488122  656592 node_conditions.go:105] duration metric: took 3.794977ms to run NodePressure ...
	I1217 20:02:30.488139  656592 start.go:242] waiting for startup goroutines ...
	I1217 20:02:30.977140  656592 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:02:30.978647  656592 addons.go:530] duration metric: took 720.646675ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:02:30.978699  656592 start.go:247] waiting for cluster config update ...
	I1217 20:02:30.978716  656592 start.go:256] writing updated cluster config ...
	I1217 20:02:30.978981  656592 ssh_runner.go:195] Run: rm -f paused
	I1217 20:02:31.035868  656592 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1217 20:02:31.037321  656592 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-322567" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.270441957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.276037666Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=1ef5815f-6dc1-4737-89e0-4f7565794f6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.284791593Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.285806681Z" level=info msg="Ran pod sandbox 3216dc3e6a2ffab19a4485b7e3451b4bb803337898b2e601336467a4d64bac11 with infra container: kube-system/kube-proxy-qpt8z/POD" id=1ef5815f-6dc1-4737-89e0-4f7565794f6a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.287309709Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=46350fcc-2be5-44a8-97d1-bca85c5a9c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.288529193Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=7bbbf57b-58c3-457f-a3fd-d4c07624f24a name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.289479555Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.35.0-rc.1" id=f41d10db-4423-48b2-9e9f-3e040f2d8286 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.291054129Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.292619878Z" level=info msg="Ran pod sandbox 11f5fce47c19ca6451a94942c23de557969d2e81b6ce0a22a1481aa8a8fd7907 with infra container: kube-system/kindnet-2f44p/POD" id=46350fcc-2be5-44a8-97d1-bca85c5a9c1a name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.292898307Z" level=info msg="Creating container: kube-system/kube-proxy-qpt8z/kube-proxy" id=2bff2fc8-7eb1-4a19-89d3-1dfdc852b4d2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.293030427Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.297385067Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=3d4d8d3d-7cce-4c67-a93d-721cdd43046b name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.308765603Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.309016389Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88" id=e92227e7-4236-40af-9442-4400a1a3c750 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.309734647Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.311423792Z" level=info msg="Creating container: kube-system/kindnet-2f44p/kindnet-cni" id=cee7d116-70c7-469f-aca0-2f96d72ea6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.311557412Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.317070017Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.317927263Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.351057249Z" level=info msg="Created container ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd: kube-system/kube-proxy-qpt8z/kube-proxy" id=2bff2fc8-7eb1-4a19-89d3-1dfdc852b4d2 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.352633779Z" level=info msg="Starting container: ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd" id=d1263143-a930-41f4-acae-6176ae03d055 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.353839326Z" level=info msg="Created container 20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33: kube-system/kindnet-2f44p/kindnet-cni" id=cee7d116-70c7-469f-aca0-2f96d72ea6d8 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.354582458Z" level=info msg="Starting container: 20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33" id=e841afc1-e026-477b-b6cf-f3d5b3586f2e name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.356335688Z" level=info msg="Started container" PID=1056 containerID=ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd description=kube-system/kube-proxy-qpt8z/kube-proxy id=d1263143-a930-41f4-acae-6176ae03d055 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3216dc3e6a2ffab19a4485b7e3451b4bb803337898b2e601336467a4d64bac11
	Dec 17 20:02:26 newest-cni-420762 crio[526]: time="2025-12-17T20:02:26.356466453Z" level=info msg="Started container" PID=1059 containerID=20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33 description=kube-system/kindnet-2f44p/kindnet-cni id=e841afc1-e026-477b-b6cf-f3d5b3586f2e name=/runtime.v1.RuntimeService/StartContainer sandboxID=11f5fce47c19ca6451a94942c23de557969d2e81b6ce0a22a1481aa8a8fd7907
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	20324d8b31869       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251   6 seconds ago       Running             kindnet-cni               1                   11f5fce47c19c       kindnet-2f44p                               kube-system
	ff0ed23e8222f       af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a   6 seconds ago       Running             kube-proxy                1                   3216dc3e6a2ff       kube-proxy-qpt8z                            kube-system
	7d90e89ed2e6c       0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2   9 seconds ago       Running             etcd                      1                   4d291830c53f0       etcd-newest-cni-420762                      kube-system
	8544c715ea46d       5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614   9 seconds ago       Running             kube-controller-manager   1                   324146884dcfd       kube-controller-manager-newest-cni-420762   kube-system
	64b8df55df523       58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce   9 seconds ago       Running             kube-apiserver            1                   3eff27b6e4242       kube-apiserver-newest-cni-420762            kube-system
	b7259506a4e5b       73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc   9 seconds ago       Running             kube-scheduler            1                   a8b6d7b767cd1       kube-scheduler-newest-cni-420762            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-420762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-420762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=newest-cni-420762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:45 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-420762
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:02:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Wed, 17 Dec 2025 20:02:25 +0000   Wed, 17 Dec 2025 20:01:44 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-420762
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                9a0da974-6b92-462d-a556-ee8264e627f2
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-420762                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         44s
	  kube-system                 kindnet-2f44p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      39s
	  kube-system                 kube-apiserver-newest-cni-420762             250m (3%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-controller-manager-newest-cni-420762    200m (2%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-qpt8z                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-scheduler-newest-cni-420762             100m (1%)     0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  40s   node-controller  Node newest-cni-420762 event: Registered Node newest-cni-420762 in Controller
	  Normal  RegisteredNode  4s    node-controller  Node newest-cni-420762 event: Registered Node newest-cni-420762 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [7d90e89ed2e6c5e28181da0ddfeb35b77f0b1a43e095576732addaa43e6437ba] <==
	{"level":"info","ts":"2025-12-17T20:02:23.582629Z","caller":"embed/etcd.go:640","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T20:02:23.583456Z","caller":"embed/etcd.go:611","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-12-17T20:02:23.582656Z","caller":"embed/etcd.go:292","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-12-17T20:02:23.582733Z","caller":"embed/etcd.go:890","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-12-17T20:02:23.582987Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-12-17T20:02:23.583715Z","caller":"membership/cluster.go:433","msg":"ignore already added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"],"added-peer-is-learner":false}
	{"level":"info","ts":"2025-12-17T20:02:23.583905Z","caller":"membership/cluster.go:674","msg":"updated cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","from":"3.6","to":"3.6"}
	{"level":"info","ts":"2025-12-17T20:02:24.570189Z","logger":"raft","caller":"v3@v3.6.0/raft.go:988","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-12-17T20:02:24.570248Z","logger":"raft","caller":"v3@v3.6.0/raft.go:930","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-12-17T20:02:24.570330Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-12-17T20:02:24.570344Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:02:24.570364Z","logger":"raft","caller":"v3@v3.6.0/raft.go:912","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.571293Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1077","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.571337Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1693","msg":"f23060b075c4c089 has received 1 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2025-12-17T20:02:24.571363Z","logger":"raft","caller":"v3@v3.6.0/raft.go:970","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.571371Z","logger":"raft","caller":"v3@v3.6.0/node.go:370","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-12-17T20:02:24.572796Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:newest-cni-420762 ClientURLs:[https://192.168.103.2:2379]}","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-17T20:02:24.572826Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:02:24.572850Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-17T20:02:24.572967Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-17T20:02:24.573036Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-17T20:02:24.574239Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:02:24.574395Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-17T20:02:24.579113Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-12-17T20:02:24.579110Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:02:33 up  1:45,  0 user,  load average: 5.57, 3.77, 2.57
	Linux newest-cni-420762 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [20324d8b31869dc1f504d321c2a805ce4e571a579d66cb84c215eb02aa3b4f33] <==
	I1217 20:02:26.515834       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:26.608373       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1217 20:02:26.608536       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:26.608567       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:26.608604       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:26.809955       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:26.810008       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:26.810022       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:26.810189       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:27.209120       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:27.210049       1 metrics.go:72] Registering metrics
	I1217 20:02:27.210219       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [64b8df55df5230a0b1d5727316ee323fddc47f3997c667cf27faf9dbec35288f] <==
	I1217 20:02:25.810066       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:02:25.810103       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1217 20:02:25.810337       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1217 20:02:25.810340       1 aggregator.go:187] initial CRD sync complete...
	I1217 20:02:25.810355       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:02:25.810365       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:02:25.810371       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:02:25.812904       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:25.812985       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:02:25.824662       1 cidrallocator.go:302] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1217 20:02:25.829207       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:02:25.831024       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:02:25.863816       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:02:25.886033       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:02:26.163263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:26.183321       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:02:26.220296       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:02:26.242804       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:02:26.251167       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:02:26.352374       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.242.224"}
	I1217 20:02:26.366436       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.23.40"}
	I1217 20:02:26.713939       1 storage_scheduling.go:139] all system priority classes are created successfully or already exist.
	I1217 20:02:29.361172       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:29.511761       1 controller.go:667] quota admission added evaluator for: endpoints
	I1217 20:02:29.562138       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8544c715ea46d06c40c805d6d1253f17f885eca03855c5b880ed720d0fff20f4] <==
	I1217 20:02:28.974453       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.973525       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.974442       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.975413       1 range_allocator.go:177] "Sending events to api server"
	I1217 20:02:28.975472       1 range_allocator.go:181] "Starting range CIDR allocator"
	I1217 20:02:28.975488       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:02:28.975495       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.976543       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.973452       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.975223       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1217 20:02:28.976906       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.976984       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977025       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977154       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977250       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.977289       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.978175       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.979023       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.979221       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:28.979224       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.011009       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.074443       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.075502       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:29.075519       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1217 20:02:29.075545       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-proxy [ff0ed23e8222fe8ec49653837dc558ec15a20e205a665b520caf64cdc5ae60dd] <==
	I1217 20:02:26.401619       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:26.460346       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:02:26.561137       1 shared_informer.go:377] "Caches are synced"
	I1217 20:02:26.561176       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1217 20:02:26.561279       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:26.580463       1 server.go:264] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:26.580528       1 server_linux.go:136] "Using iptables Proxier"
	I1217 20:02:26.585966       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:26.586364       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1217 20:02:26.586388       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:26.587846       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:26.587857       1 config.go:200] "Starting service config controller"
	I1217 20:02:26.587874       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:26.587880       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:26.587972       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:26.587980       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:26.588036       1 config.go:309] "Starting node config controller"
	I1217 20:02:26.588043       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:26.588049       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:26.688116       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:02:26.688116       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:26.688188       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b7259506a4e5b6bee4d005c6c0116262f2d16fb84d5378bc6f468fae3b7d2570] <==
	I1217 20:02:23.940157       1 serving.go:386] Generated self-signed cert in-memory
	W1217 20:02:25.735738       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1217 20:02:25.735796       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1217 20:02:25.735811       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1217 20:02:25.735822       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1217 20:02:25.795377       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1217 20:02:25.795442       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:25.800569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:25.800619       1 shared_informer.go:370] "Waiting for caches to sync"
	I1217 20:02:25.800701       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:02:25.800753       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:02:25.900833       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.850292     679 kubelet_node_status.go:77] "Successfully registered node" node="newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.850325     679 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.853587     679 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.891526     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-420762\" already exists" pod="kube-system/etcd-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.891564     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.899024     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-420762\" already exists" pod="kube-system/kube-apiserver-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.899066     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.908220     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-420762\" already exists" pod="kube-system/kube-controller-manager-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.908435     679 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.918702     679 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-420762\" already exists" pod="kube-system/kube-scheduler-newest-cni-420762"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: I1217 20:02:25.955213     679 apiserver.go:52] "Watching apiserver"
	Dec 17 20:02:25 newest-cni-420762 kubelet[679]: E1217 20:02:25.962711     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-newest-cni-420762" containerName="kube-controller-manager"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: E1217 20:02:26.007936     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-420762" containerName="kube-scheduler"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: E1217 20:02:26.008323     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-newest-cni-420762" containerName="kube-apiserver"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: E1217 20:02:26.008399     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-newest-cni-420762" containerName="etcd"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.060327     679 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.159974     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-xtables-lock\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160032     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bbdb455-62b1-48ac-a4d9-b930a3dc010f-lib-modules\") pod \"kube-proxy-qpt8z\" (UID: \"5bbdb455-62b1-48ac-a4d9-b930a3dc010f\") " pod="kube-system/kube-proxy-qpt8z"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160101     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-cni-cfg\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160126     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-xtables-lock\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:02:26 newest-cni-420762 kubelet[679]: I1217 20:02:26.160140     679 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1888eaab-a42f-4c23-87e4-6c698a41af87-lib-modules\") pod \"kindnet-2f44p\" (UID: \"1888eaab-a42f-4c23-87e4-6c698a41af87\") " pod="kube-system/kindnet-2f44p"
	Dec 17 20:02:27 newest-cni-420762 kubelet[679]: E1217 20:02:27.014315     679 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-newest-cni-420762" containerName="kube-scheduler"
	Dec 17 20:02:28 newest-cni-420762 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:02:28 newest-cni-420762 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:02:28 newest-cni-420762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-420762 -n newest-cni-420762
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-420762 -n newest-cni-420762: exit status 2 (356.218964ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context newest-cni-420762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677
helpers_test.go:283: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677: exit status 1 (77.403873ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-7d764666f9-jsv2j" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-867fb5f87b-ztgjb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-b84665fb8-d4677" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context newest-cni-420762 describe pod coredns-7d764666f9-jsv2j storage-provisioner dashboard-metrics-scraper-867fb5f87b-ztgjb kubernetes-dashboard-b84665fb8-d4677: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-759234 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-759234 --alsologtostderr -v=1: exit status 80 (1.933712075s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-759234 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:02:54.430666  667593 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:54.431026  667593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:54.431041  667593 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:54.431047  667593 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:54.431426  667593 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:54.431822  667593 out.go:368] Setting JSON to false
	I1217 20:02:54.431854  667593 mustload.go:66] Loading cluster: default-k8s-diff-port-759234
	I1217 20:02:54.432507  667593 config.go:182] Loaded profile config "default-k8s-diff-port-759234": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:54.433185  667593 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-759234 --format={{.State.Status}}
	I1217 20:02:54.458217  667593 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:02:54.458591  667593 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:54.547065  667593 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-12-17 20:02:54.535319071 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:54.547929  667593 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-759234 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 20:02:54.549750  667593 out.go:179] * Pausing node default-k8s-diff-port-759234 ... 
	I1217 20:02:54.550822  667593 host.go:66] Checking if "default-k8s-diff-port-759234" exists ...
	I1217 20:02:54.551312  667593 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:54.551379  667593 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-759234
	I1217 20:02:54.573723  667593 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/default-k8s-diff-port-759234/id_rsa Username:docker}
	I1217 20:02:54.678622  667593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:54.697426  667593 pause.go:52] kubelet running: true
	I1217 20:02:54.697499  667593 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:02:54.876925  667593 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:02:54.877015  667593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:02:54.957642  667593 cri.go:89] found id: "b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f"
	I1217 20:02:54.957668  667593 cri.go:89] found id: "1f92b0022b9d9a916df843f4334eb7bbb4b21ace14628e070640e5df15619f23"
	I1217 20:02:54.957674  667593 cri.go:89] found id: "b6958cd5a4d6c327cfb1850926f770862f2ba4f2b196595b819413ce72236040"
	I1217 20:02:54.957680  667593 cri.go:89] found id: "5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a"
	I1217 20:02:54.957685  667593 cri.go:89] found id: "ff749e52a1c7b238ec2a3b689c2471463861c44182ba71da511bc1f90ba22d68"
	I1217 20:02:54.957697  667593 cri.go:89] found id: "d83a0fe0ebf9e431abfef83125000274ec881515d8b2fe37492a61682b8b7a56"
	I1217 20:02:54.957705  667593 cri.go:89] found id: "13df2853266238c53f3daab51af6a83329ec267b44072f537e38af71a0078c3f"
	I1217 20:02:54.957709  667593 cri.go:89] found id: "85ffda0bbbbe80bde1d1c7403094674a0f0d609d5aa8572f8c470fd845327c85"
	I1217 20:02:54.957714  667593 cri.go:89] found id: "4d360a4c3fd6f7b37c23d2fae6316c0a6398e536b4ed3c70d59262bc9cbab9c7"
	I1217 20:02:54.957738  667593 cri.go:89] found id: "cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f"
	I1217 20:02:54.957746  667593 cri.go:89] found id: "f01e59b3a5bec96adc422b58a3f2d145f5ded1ce16afc6fa1bdf3418adf64dc8"
	I1217 20:02:54.957751  667593 cri.go:89] found id: ""
	I1217 20:02:54.957801  667593 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:02:54.973324  667593 retry.go:31] will retry after 227.00298ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:54Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:55.200743  667593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:55.215035  667593 pause.go:52] kubelet running: false
	I1217 20:02:55.215113  667593 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:02:55.436010  667593 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:02:55.436159  667593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:02:55.529416  667593 cri.go:89] found id: "b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f"
	I1217 20:02:55.529445  667593 cri.go:89] found id: "1f92b0022b9d9a916df843f4334eb7bbb4b21ace14628e070640e5df15619f23"
	I1217 20:02:55.529451  667593 cri.go:89] found id: "b6958cd5a4d6c327cfb1850926f770862f2ba4f2b196595b819413ce72236040"
	I1217 20:02:55.529456  667593 cri.go:89] found id: "5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a"
	I1217 20:02:55.529459  667593 cri.go:89] found id: "ff749e52a1c7b238ec2a3b689c2471463861c44182ba71da511bc1f90ba22d68"
	I1217 20:02:55.529462  667593 cri.go:89] found id: "d83a0fe0ebf9e431abfef83125000274ec881515d8b2fe37492a61682b8b7a56"
	I1217 20:02:55.529465  667593 cri.go:89] found id: "13df2853266238c53f3daab51af6a83329ec267b44072f537e38af71a0078c3f"
	I1217 20:02:55.529467  667593 cri.go:89] found id: "85ffda0bbbbe80bde1d1c7403094674a0f0d609d5aa8572f8c470fd845327c85"
	I1217 20:02:55.529470  667593 cri.go:89] found id: "4d360a4c3fd6f7b37c23d2fae6316c0a6398e536b4ed3c70d59262bc9cbab9c7"
	I1217 20:02:55.529477  667593 cri.go:89] found id: "cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f"
	I1217 20:02:55.529480  667593 cri.go:89] found id: "f01e59b3a5bec96adc422b58a3f2d145f5ded1ce16afc6fa1bdf3418adf64dc8"
	I1217 20:02:55.529483  667593 cri.go:89] found id: ""
	I1217 20:02:55.529526  667593 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:02:55.542401  667593 retry.go:31] will retry after 462.321985ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:55Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:56.005923  667593 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:56.019851  667593 pause.go:52] kubelet running: false
	I1217 20:02:56.019952  667593 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:02:56.167003  667593 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:02:56.167119  667593 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:02:56.246428  667593 cri.go:89] found id: "b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f"
	I1217 20:02:56.246449  667593 cri.go:89] found id: "1f92b0022b9d9a916df843f4334eb7bbb4b21ace14628e070640e5df15619f23"
	I1217 20:02:56.246455  667593 cri.go:89] found id: "b6958cd5a4d6c327cfb1850926f770862f2ba4f2b196595b819413ce72236040"
	I1217 20:02:56.246461  667593 cri.go:89] found id: "5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a"
	I1217 20:02:56.246466  667593 cri.go:89] found id: "ff749e52a1c7b238ec2a3b689c2471463861c44182ba71da511bc1f90ba22d68"
	I1217 20:02:56.246471  667593 cri.go:89] found id: "d83a0fe0ebf9e431abfef83125000274ec881515d8b2fe37492a61682b8b7a56"
	I1217 20:02:56.246476  667593 cri.go:89] found id: "13df2853266238c53f3daab51af6a83329ec267b44072f537e38af71a0078c3f"
	I1217 20:02:56.246480  667593 cri.go:89] found id: "85ffda0bbbbe80bde1d1c7403094674a0f0d609d5aa8572f8c470fd845327c85"
	I1217 20:02:56.246484  667593 cri.go:89] found id: "4d360a4c3fd6f7b37c23d2fae6316c0a6398e536b4ed3c70d59262bc9cbab9c7"
	I1217 20:02:56.246499  667593 cri.go:89] found id: "cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f"
	I1217 20:02:56.246508  667593 cri.go:89] found id: "f01e59b3a5bec96adc422b58a3f2d145f5ded1ce16afc6fa1bdf3418adf64dc8"
	I1217 20:02:56.246512  667593 cri.go:89] found id: ""
	I1217 20:02:56.246575  667593 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:02:56.266327  667593 out.go:203] 
	W1217 20:02:56.270645  667593 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:56Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:56Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:02:56.270670  667593 out.go:285] * 
	* 
	W1217 20:02:56.275339  667593 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:02:56.276723  667593 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-759234 --alsologtostderr -v=1 failed: exit status 80
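Note: the pause failure above is reproducible by hand. minikube's pause path lists the CRI containers successfully (see the crictl IDs in the stderr block) and only exits when `sudo runc list -f json` fails with "open /run/runc: no such file or directory". A minimal manual check, assuming the profile name from this run and only the standard runc/crictl invocations already shown in the log (reading the error text as runc's state directory being absent is an interpretation, not something the log states outright):

	# open a shell in the node container for this profile
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-759234
	# inside the node: the CRI side answers normally for the same namespace the test queries
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# ...while the directory runc complains about is missing, so the listing fails as in the log
	ls -ld /run/runc
	sudo runc list -f json
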
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-759234
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-759234:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8",
	        "Created": "2025-12-17T20:00:47.282778313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 649426,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:01:56.144994012Z",
	            "FinishedAt": "2025-12-17T20:01:54.925996647Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/hosts",
	        "LogPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8-json.log",
	        "Name": "/default-k8s-diff-port-759234",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-759234:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-759234",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8",
	                "LowerDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-759234",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-759234/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-759234",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-759234",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-759234",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6531244d21d300d973005bf3fd3904c3b327673ec76c465093a7f6a16906e5ff",
	            "SandboxKey": "/var/run/docker/netns/6531244d21d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-759234": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "034e5df717c044fefebfa38f3b7a5265a61b576bc983becdb12880ee6b18c027",
	                    "EndpointID": "c984acaeecfb9ebe7f9636503957058d8178a99e1a6f6842245d79e2728d0547",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:83:5b:c0:15:52",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-759234",
	                        "fb8483ff8e2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
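Note: minikube's docker driver resolves the node's SSH endpoint from this kind of inspect data. As a quick cross-check against the Ports map above (22/tcp mapped to host port 33468), the same value can be read back with a standard docker inspect Go-template; the container name is the profile name from this run:

	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-759234
	# with the NetworkSettings shown above this prints 33468
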
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234: exit status 2 (414.025101ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
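Note: the status invocation above templates only the host field, so it prints just "Running"; the non-zero exit status reflects that other components are not in their expected state, which is plausible here since the kubelet was disabled during the failed pause attempt (see the `sudo systemctl disable --now kubelet` lines earlier). A sketch for dumping the remaining fields follows; the field names mirror the default `minikube status` output and are an assumption about the status template, not something taken from this log:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-759234 \
	  --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}'
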
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759234 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-759234 logs -n 25: (1.445784114s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759234 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ stop    │ -p newest-cni-420762 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-420762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ stop    │ -p embed-certs-147021 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ image   │ newest-cni-420762 image list --format=json                                                                                                                                                                                                         │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ pause   │ -p newest-cni-420762 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-322567                                                                                                                                                                                                                       │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p auto-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-601560                  │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ delete  │ -p newest-cni-420762                                                                                                                                                                                                                               │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ delete  │ -p newest-cni-420762                                                                                                                                                                                                                               │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p kindnet-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                           │ kindnet-601560               │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-147021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ image   │ default-k8s-diff-port-759234 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ pause   │ -p default-k8s-diff-port-759234 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:02:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:02:43.597307  663785 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:43.597462  663785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:43.597478  663785 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:43.597495  663785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:43.597723  663785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:43.598258  663785 out.go:368] Setting JSON to false
	I1217 20:02:43.599444  663785 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6315,"bootTime":1765995449,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:02:43.599537  663785 start.go:143] virtualization: kvm guest
	I1217 20:02:43.601655  663785 out.go:179] * [embed-certs-147021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:02:43.603367  663785 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:02:43.603423  663785 notify.go:221] Checking for updates...
	I1217 20:02:43.606402  663785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:02:43.608803  663785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:43.612274  663785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:02:43.614281  663785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:02:43.615778  663785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:02:43.618527  663785 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:43.619342  663785 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:02:43.650357  663785 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:02:43.650566  663785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:43.720232  663785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 20:02:43.70903939 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:43.720389  663785 docker.go:319] overlay module found
	I1217 20:02:43.723268  663785 out.go:179] * Using the docker driver based on existing profile
	I1217 20:02:43.724535  663785 start.go:309] selected driver: docker
	I1217 20:02:43.724557  663785 start.go:927] validating driver "docker" against &{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:43.724681  663785 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:02:43.725432  663785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:43.805834  663785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-17 20:02:43.785586246 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:43.806588  663785 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:02:43.806640  663785 cni.go:84] Creating CNI manager for ""
	I1217 20:02:43.806723  663785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:43.806829  663785 start.go:353] cluster config:
	{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:43.809493  663785 out.go:179] * Starting "embed-certs-147021" primary control-plane node in "embed-certs-147021" cluster
	I1217 20:02:43.811052  663785 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:02:43.812488  663785 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:02:43.480042  660659 cli_runner.go:164] Run: docker network inspect auto-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:43.501548  660659 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:43.506583  660659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:43.519861  660659 kubeadm.go:884] updating cluster {Name:auto-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:43.519978  660659 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:43.520021  660659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:43.558992  660659 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:43.559021  660659 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:43.559072  660659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:43.588654  660659 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:43.588677  660659 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:43.588687  660659 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 20:02:43.588803  660659 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-601560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:auto-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:02:43.588882  660659 ssh_runner.go:195] Run: crio config
	I1217 20:02:43.653811  660659 cni.go:84] Creating CNI manager for ""
	I1217 20:02:43.653835  660659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:43.653858  660659 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:43.653913  660659 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-601560 NodeName:auto-601560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:43.654128  660659 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-601560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:43.654205  660659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:02:43.664467  660659 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:43.664547  660659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:43.675783  660659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1217 20:02:43.694191  660659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:02:43.714151  660659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1217 20:02:43.730644  660659 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:43.734822  660659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:43.749458  660659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:43.813773  663785 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:43.813813  663785 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:02:43.813827  663785 cache.go:65] Caching tarball of preloaded images
	I1217 20:02:43.813887  663785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:02:43.813957  663785 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:02:43.813974  663785 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:02:43.814282  663785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:02:43.841065  663785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:02:43.841096  663785 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:02:43.841119  663785 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:02:43.841160  663785 start.go:360] acquireMachinesLock for embed-certs-147021: {Name:mkc6328ab9d874d1f1fffe133279d94e48b1c6e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:02:43.841261  663785 start.go:364] duration metric: took 51.764µs to acquireMachinesLock for "embed-certs-147021"
	I1217 20:02:43.841294  663785 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:02:43.841305  663785 fix.go:54] fixHost starting: 
	I1217 20:02:43.841582  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:43.861935  663785 fix.go:112] recreateIfNeeded on embed-certs-147021: state=Stopped err=<nil>
	W1217 20:02:43.861983  663785 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:02:43.869002  660659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:43.890450  660659 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560 for IP: 192.168.76.2
	I1217 20:02:43.890479  660659 certs.go:195] generating shared ca certs ...
	I1217 20:02:43.890502  660659 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:43.890697  660659 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:43.890770  660659 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:43.890780  660659 certs.go:257] generating profile certs ...
	I1217 20:02:43.890856  660659 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.key
	I1217 20:02:43.890879  660659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.crt with IP's: []
	I1217 20:02:43.964742  660659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.crt ...
	I1217 20:02:43.964770  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.crt: {Name:mk20ec0393b60e0059a93fa0ea47f7b86671a83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:43.964940  660659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.key ...
	I1217 20:02:43.964951  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.key: {Name:mke6bb134cd14678ed704bb54f28dec2d4076df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:43.965035  660659 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4
	I1217 20:02:43.965049  660659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 20:02:44.032122  660659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4 ...
	I1217 20:02:44.032198  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4: {Name:mk8ed0294878a6563e8553297c9374261df588a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.032385  660659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4 ...
	I1217 20:02:44.032404  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4: {Name:mke5fdbcc4fb97cd69180fb9179af9750210c230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.032514  660659 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt
	I1217 20:02:44.032620  660659 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key
	I1217 20:02:44.032703  660659 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key
	I1217 20:02:44.032725  660659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt with IP's: []
	I1217 20:02:44.117499  660659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt ...
	I1217 20:02:44.117536  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt: {Name:mk08364f679f8c12e34b4a6a41dea1c7facafcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.117729  660659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key ...
	I1217 20:02:44.117748  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key: {Name:mk11c638c34b9bf51fbf913e4bda9172b3eef8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.117976  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:44.118025  660659 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:44.118039  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:44.118092  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:44.118135  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:44.118170  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:44.118227  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:44.118891  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:44.140817  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:44.161494  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:44.182356  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:44.202058  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1217 20:02:44.220529  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:02:44.239782  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:44.258573  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:02:44.283042  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:44.302241  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:44.320437  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:44.340559  660659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:44.358336  660659 ssh_runner.go:195] Run: openssl version
	I1217 20:02:44.365258  660659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.373190  660659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:44.381397  660659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.385961  660659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.386026  660659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.439602  660659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:44.450064  660659 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:02:44.458680  660659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.467017  660659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:44.475609  660659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.479786  660659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.479841  660659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.517019  660659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:44.525270  660659 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:44.533566  660659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.541505  660659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:44.549366  660659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.553481  660659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.553546  660659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.587542  660659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:44.596369  660659 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:02:44.604210  660659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:44.608366  660659 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:02:44.608435  660659 kubeadm.go:401] StartCluster: {Name:auto-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:44.608522  660659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:44.608576  660659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:44.637881  660659 cri.go:89] found id: ""
	I1217 20:02:44.637961  660659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:44.646164  660659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:02:44.654385  660659 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:02:44.654446  660659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:02:44.663421  660659 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:02:44.663438  660659 kubeadm.go:158] found existing configuration files:
	
	I1217 20:02:44.663488  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:02:44.671413  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:02:44.671483  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:02:44.679480  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:02:44.687430  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:02:44.687495  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:02:44.695511  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:02:44.704982  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:02:44.705053  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:02:44.713472  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:02:44.722499  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:02:44.722567  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:02:44.731266  660659 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:02:44.769238  660659 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:02:44.769313  660659 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:02:44.789783  660659 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:02:44.789905  660659 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:02:44.789954  660659 kubeadm.go:319] OS: Linux
	I1217 20:02:44.790015  660659 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:02:44.790073  660659 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:02:44.790159  660659 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:02:44.790224  660659 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:02:44.790283  660659 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:02:44.790351  660659 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:02:44.790414  660659 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:02:44.790468  660659 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:02:44.853314  660659 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:02:44.853449  660659 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:02:44.853610  660659 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:02:44.862326  660659 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:02:43.611400  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Running}}
	I1217 20:02:43.634670  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:02:43.658419  661899 cli_runner.go:164] Run: docker exec kindnet-601560 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:02:43.722281  661899 oci.go:144] the created container "kindnet-601560" has a running status.
	I1217 20:02:43.722316  661899 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa...
	I1217 20:02:43.760254  661899 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:02:43.796812  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:02:43.822805  661899 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:02:43.822836  661899 kic_runner.go:114] Args: [docker exec --privileged kindnet-601560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:02:43.872271  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:02:43.897400  661899 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:43.897498  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:43.928589  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:43.929635  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:43.929667  661899 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:43.930479  661899 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53518->127.0.0.1:33483: read: connection reset by peer
	I1217 20:02:47.080384  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-601560
	
	I1217 20:02:47.080417  661899 ubuntu.go:182] provisioning hostname "kindnet-601560"
	I1217 20:02:47.080481  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.099092  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.099335  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:47.099348  661899 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-601560 && echo "kindnet-601560" | sudo tee /etc/hostname
	I1217 20:02:47.258395  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-601560
	
	I1217 20:02:47.258499  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.278943  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.279227  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:47.279253  661899 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-601560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-601560/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-601560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:47.428844  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:47.428891  661899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:47.428936  661899 ubuntu.go:190] setting up certificates
	I1217 20:02:47.428955  661899 provision.go:84] configureAuth start
	I1217 20:02:47.429032  661899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-601560
	I1217 20:02:47.451573  661899 provision.go:143] copyHostCerts
	I1217 20:02:47.451638  661899 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:47.451651  661899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:47.451721  661899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:47.451831  661899 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:47.451841  661899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:47.451872  661899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:47.451939  661899 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:47.451948  661899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:47.451971  661899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:47.452026  661899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.kindnet-601560 san=[127.0.0.1 192.168.103.2 kindnet-601560 localhost minikube]
	I1217 20:02:47.563326  661899 provision.go:177] copyRemoteCerts
	I1217 20:02:47.563393  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:47.563451  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.582181  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:47.684908  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:47.709099  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 20:02:47.731457  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:02:47.750120  661899 provision.go:87] duration metric: took 321.144631ms to configureAuth
	I1217 20:02:47.750151  661899 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:47.750367  661899 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:47.750489  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.770762  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.770982  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:47.770998  661899 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:48.061288  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:48.061319  661899 machine.go:97] duration metric: took 4.163895838s to provisionDockerMachine
	I1217 20:02:48.061333  661899 client.go:176] duration metric: took 9.328604528s to LocalClient.Create
	I1217 20:02:48.061362  661899 start.go:167] duration metric: took 9.328675971s to libmachine.API.Create "kindnet-601560"
	I1217 20:02:48.061378  661899 start.go:293] postStartSetup for "kindnet-601560" (driver="docker")
	I1217 20:02:48.061394  661899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:48.061469  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:48.061525  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.082640  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:48.188417  661899 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:48.192250  661899 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:48.192284  661899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:48.192299  661899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:48.192352  661899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:48.192420  661899 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:48.192509  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:48.201128  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:48.223483  661899 start.go:296] duration metric: took 162.083422ms for postStartSetup
	I1217 20:02:48.223865  661899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-601560
	I1217 20:02:48.245051  661899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/config.json ...
	I1217 20:02:48.245373  661899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:48.245420  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.265186  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:48.366213  661899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:48.371154  661899 start.go:128] duration metric: took 9.641617458s to createHost
	I1217 20:02:48.371184  661899 start.go:83] releasing machines lock for "kindnet-601560", held for 9.64176076s
	I1217 20:02:48.371269  661899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-601560
	I1217 20:02:48.391475  661899 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:48.391546  661899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:48.391557  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.391636  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.413608  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:48.413793  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:44.864701  660659 out.go:252]   - Generating certificates and keys ...
	I1217 20:02:44.864802  660659 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:02:44.864879  660659 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:02:45.136018  660659 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:02:45.319141  660659 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:02:45.467768  660659 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:02:46.336397  660659 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:02:46.414848  660659 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:02:46.415070  660659 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-601560 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 20:02:46.645327  660659 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:02:46.645505  660659 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-601560 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 20:02:46.972638  660659 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:02:47.223477  660659 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:02:47.375383  660659 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:02:47.375483  660659 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:02:47.823305  660659 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:02:47.930913  660659 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:02:48.096526  660659 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:02:48.275879  660659 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:02:48.494413  660659 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:02:48.494978  660659 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:02:48.500012  660659 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:02:43.864599  663785 out.go:252] * Restarting existing docker container for "embed-certs-147021" ...
	I1217 20:02:43.864710  663785 cli_runner.go:164] Run: docker start embed-certs-147021
	I1217 20:02:44.132669  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:44.154240  663785 kic.go:430] container "embed-certs-147021" state is running.
	I1217 20:02:44.154805  663785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:02:44.178115  663785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:02:44.178408  663785 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:44.178513  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:44.198136  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:44.198394  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:44.198407  663785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:44.198898  663785 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48296->127.0.0.1:33488: read: connection reset by peer
	I1217 20:02:47.348304  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-147021
	
	I1217 20:02:47.348337  663785 ubuntu.go:182] provisioning hostname "embed-certs-147021"
	I1217 20:02:47.348419  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:47.366963  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.367192  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:47.367209  663785 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-147021 && echo "embed-certs-147021" | sudo tee /etc/hostname
	I1217 20:02:47.527178  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-147021
	
	I1217 20:02:47.527279  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:47.547145  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.547394  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:47.547420  663785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-147021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-147021/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-147021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:47.694326  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:47.694359  663785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:47.694415  663785 ubuntu.go:190] setting up certificates
	I1217 20:02:47.694429  663785 provision.go:84] configureAuth start
	I1217 20:02:47.694487  663785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:02:47.718735  663785 provision.go:143] copyHostCerts
	I1217 20:02:47.718817  663785 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:47.718840  663785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:47.718908  663785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:47.719038  663785 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:47.719049  663785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:47.719109  663785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:47.719218  663785 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:47.719229  663785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:47.719256  663785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:47.719335  663785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.embed-certs-147021 san=[127.0.0.1 192.168.85.2 embed-certs-147021 localhost minikube]
	I1217 20:02:47.856517  663785 provision.go:177] copyRemoteCerts
	I1217 20:02:47.856586  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:47.856629  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:47.877532  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:47.982215  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:48.001798  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 20:02:48.021223  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:02:48.040641  663785 provision.go:87] duration metric: took 346.194733ms to configureAuth
	I1217 20:02:48.040674  663785 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:48.040880  663785 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:48.041029  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.060669  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:48.061027  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:48.061056  663785 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:48.431341  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:48.431371  663785 machine.go:97] duration metric: took 4.252940597s to provisionDockerMachine
	I1217 20:02:48.431386  663785 start.go:293] postStartSetup for "embed-certs-147021" (driver="docker")
	I1217 20:02:48.431400  663785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:48.431476  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:48.431534  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.456476  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.564101  663785 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:48.567966  663785 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:48.567999  663785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:48.568014  663785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:48.568092  663785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:48.568209  663785 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:48.568362  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:48.576561  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:48.595217  663785 start.go:296] duration metric: took 163.814903ms for postStartSetup
	I1217 20:02:48.595292  663785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:48.595339  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.501502  660659 out.go:252]   - Booting up control plane ...
	I1217 20:02:48.501656  660659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:02:48.501776  660659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:02:48.502348  660659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:02:48.516278  660659 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:02:48.516435  660659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:02:48.523781  660659 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:02:48.523985  660659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:02:48.524057  660659 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:02:48.624366  660659 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:02:48.624548  660659 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:02:48.579398  661899 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:48.586164  661899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:48.626011  661899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:48.631109  661899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:48.631187  661899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:48.661358  661899 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:02:48.661381  661899 start.go:496] detecting cgroup driver to use...
	I1217 20:02:48.661414  661899 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:48.661466  661899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:48.679375  661899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:48.692070  661899 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:48.692146  661899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:48.708630  661899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:48.729969  661899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:48.829550  661899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:48.924106  661899 docker.go:234] disabling docker service ...
	I1217 20:02:48.924201  661899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:48.947385  661899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:48.961958  661899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:49.066770  661899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:49.161061  661899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:02:49.174468  661899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:49.189683  661899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:49.189752  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.208286  661899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:49.208443  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.218617  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.228519  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.238280  661899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:49.247419  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.257212  661899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.271455  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.280614  661899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:49.288252  661899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:49.295603  661899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:49.382931  661899 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:02:49.548787  661899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:49.548860  661899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:49.553975  661899 start.go:564] Will wait 60s for crictl version
	I1217 20:02:49.554042  661899 ssh_runner.go:195] Run: which crictl
	I1217 20:02:49.558923  661899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:49.592357  661899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:49.592500  661899 ssh_runner.go:195] Run: crio --version
	I1217 20:02:49.632429  661899 ssh_runner.go:195] Run: crio --version
	I1217 20:02:49.684016  661899 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:02:48.615643  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.717361  663785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:48.722493  663785 fix.go:56] duration metric: took 4.881179695s for fixHost
	I1217 20:02:48.722525  663785 start.go:83] releasing machines lock for "embed-certs-147021", held for 4.881249387s
	I1217 20:02:48.722604  663785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:02:48.743848  663785 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:48.743901  663785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:48.743917  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.743964  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.769219  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.773047  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.932670  663785 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:48.939847  663785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:48.979562  663785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:48.984792  663785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:48.984868  663785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:48.995530  663785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:02:48.995563  663785 start.go:496] detecting cgroup driver to use...
	I1217 20:02:48.995597  663785 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:48.995644  663785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:49.017579  663785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:49.036716  663785 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:49.036796  663785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:49.052759  663785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:49.066858  663785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:49.167570  663785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:49.252733  663785 docker.go:234] disabling docker service ...
	I1217 20:02:49.252800  663785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:49.267268  663785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:49.281274  663785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:49.370439  663785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:49.457423  663785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:02:49.473854  663785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:49.492403  663785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:49.492468  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.505388  663785 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:49.505468  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.516670  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.527052  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.539723  663785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:49.552332  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.564669  663785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.577556  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.592301  663785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:49.603509  663785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:49.615170  663785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:49.721174  663785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:02:49.890529  663785 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:49.890598  663785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:49.895802  663785 start.go:564] Will wait 60s for crictl version
	I1217 20:02:49.895867  663785 ssh_runner.go:195] Run: which crictl
	I1217 20:02:49.900739  663785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:49.933971  663785 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:49.934061  663785 ssh_runner.go:195] Run: crio --version
	I1217 20:02:49.969309  663785 ssh_runner.go:195] Run: crio --version
	I1217 20:02:50.024177  663785 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:02:50.025394  663785 cli_runner.go:164] Run: docker network inspect embed-certs-147021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:50.046888  663785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:50.052845  663785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:50.067008  663785 kubeadm.go:884] updating cluster {Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:50.067307  663785 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:50.067376  663785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:50.107911  663785 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:50.107936  663785 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:50.108004  663785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:50.140588  663785 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:50.140613  663785 cache_images.go:86] Images are preloaded, skipping loading
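
The preload check runs `sudo crictl images --output json` and concludes that every image needed for v1.34.3 is already on the node, so nothing has to be extracted or pulled. A rough Go sketch of that check; the JSON field names and the sample tag are assumptions, not taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages models just the part of `crictl images --output json` we need;
// the field names here are assumed from typical crictl output.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	// "All images are preloaded" boils down to: every expected tag is present.
	have := map[string]bool{}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	fmt.Println("kube-apiserver preloaded:", have["registry.k8s.io/kube-apiserver:v1.34.3"])
}
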
	I1217 20:02:50.140624  663785 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1217 20:02:50.140746  663785 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-147021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
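
The [Unit]/[Service] fragment above is the kubelet systemd drop-in that gets written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later; only the node name, node IP and binary path change per profile. A cut-down sketch of how such a drop-in can be rendered (the template text is abridged, not minikube's full template):

package main

import (
	"os"
	"text/template"
)

// An abridged version of the drop-in shown above: per-profile values are
// substituted into an otherwise fixed unit file.
const kubeletDropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} --kubeconfig=/etc/kubernetes/kubelet.conf

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.34.3",
		"NodeName":          "embed-certs-147021",
		"NodeIP":            "192.168.85.2",
	})
}
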
	I1217 20:02:50.140832  663785 ssh_runner.go:195] Run: crio config
	I1217 20:02:50.204906  663785 cni.go:84] Creating CNI manager for ""
	I1217 20:02:50.204929  663785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:50.204946  663785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:50.204969  663785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-147021 NodeName:embed-certs-147021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:50.205122  663785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-147021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:50.205197  663785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:02:50.214324  663785 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:50.214403  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:50.225656  663785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1217 20:02:50.242737  663785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:02:50.260770  663785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1217 20:02:50.278188  663785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:50.282910  663785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:50.295374  663785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:50.407365  663785 ssh_runner.go:195] Run: sudo systemctl start kubelet
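
The bash one-liner above rewrites /etc/hosts so that exactly one line maps control-plane.minikube.internal to the node IP: every line ending in a tab plus that name is dropped and a fresh entry is appended. A Go sketch of the same idea (it writes the file directly and therefore assumes root; the real step stages a temp file and copies it into place with sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// addHostMapping drops any existing "<ip>\t<name>" line and appends a new one,
// which is what the grep -v / echo / sudo cp pipeline above does.
func addHostMapping(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := addHostMapping("/etc/hosts", "192.168.85.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
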
	I1217 20:02:50.431348  663785 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021 for IP: 192.168.85.2
	I1217 20:02:50.431371  663785 certs.go:195] generating shared ca certs ...
	I1217 20:02:50.431394  663785 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.431579  663785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:50.431645  663785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:50.431657  663785 certs.go:257] generating profile certs ...
	I1217 20:02:50.431781  663785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.key
	I1217 20:02:50.431862  663785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key.45939a3a
	I1217 20:02:50.431911  663785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key
	I1217 20:02:50.432056  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:50.432118  663785 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:50.432129  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:50.432166  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:50.432208  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:50.432242  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:50.432309  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:50.433284  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:50.463769  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:50.488659  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:50.513334  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:50.542448  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 20:02:50.575166  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:02:50.600582  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:50.627050  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:02:50.656451  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:50.683235  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:50.707615  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:50.731374  663785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:50.748832  663785 ssh_runner.go:195] Run: openssl version
	I1217 20:02:50.757380  663785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.767216  663785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:50.779323  663785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.784796  663785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.784865  663785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.844694  663785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:50.861389  663785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.872989  663785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:50.884269  663785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.889153  663785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.889217  663785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.932104  663785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:50.942603  663785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.954742  663785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:50.968858  663785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.976199  663785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.976262  663785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:51.040710  663785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:51.052332  663785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:51.057501  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:02:51.116321  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:02:51.190454  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:02:51.250684  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:02:51.309302  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:02:51.360264  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
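
The six `openssl x509 -checkend 86400` runs above ask one question per certificate: does it expire within the next 24 hours? The same check in pure Go, as a sketch (the path is one of the certs listed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin answers the same question as `openssl x509 -checkend <seconds>`:
// true if the certificate's NotAfter falls inside the given window.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
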
	I1217 20:02:51.416579  663785 kubeadm.go:401] StartCluster: {Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:51.416695  663785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:51.416774  663785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:51.478151  663785 cri.go:89] found id: "908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb"
	I1217 20:02:51.478197  663785 cri.go:89] found id: "9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40"
	I1217 20:02:51.478203  663785 cri.go:89] found id: "65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d"
	I1217 20:02:51.478208  663785 cri.go:89] found id: "d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231"
	I1217 20:02:51.478222  663785 cri.go:89] found id: ""
	I1217 20:02:51.478276  663785 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:02:51.508331  663785 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:51Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:51.508425  663785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:51.526578  663785 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:02:51.526603  663785 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:02:51.526653  663785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:02:51.535937  663785 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:02:51.536655  663785 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-147021" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:51.536957  663785 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-372245/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-147021" cluster setting kubeconfig missing "embed-certs-147021" context setting]
	I1217 20:02:51.537678  663785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:51.539655  663785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:02:51.551164  663785 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 20:02:51.551204  663785 kubeadm.go:602] duration metric: took 24.594853ms to restartPrimaryControlPlane
	I1217 20:02:51.551216  663785 kubeadm.go:403] duration metric: took 134.651056ms to StartCluster
	I1217 20:02:51.551242  663785 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:51.551320  663785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:51.552909  663785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:51.553351  663785 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:02:51.553626  663785 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:51.553706  663785 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:02:51.553805  663785 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-147021"
	I1217 20:02:51.553827  663785 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-147021"
	W1217 20:02:51.553838  663785 addons.go:248] addon storage-provisioner should already be in state true
	I1217 20:02:51.553863  663785 addons.go:70] Setting dashboard=true in profile "embed-certs-147021"
	I1217 20:02:51.553882  663785 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:02:51.553907  663785 addons.go:239] Setting addon dashboard=true in "embed-certs-147021"
	W1217 20:02:51.553920  663785 addons.go:248] addon dashboard should already be in state true
	I1217 20:02:51.553958  663785 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:02:51.554493  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.554516  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.554716  663785 addons.go:70] Setting default-storageclass=true in profile "embed-certs-147021"
	I1217 20:02:51.554738  663785 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-147021"
	I1217 20:02:51.555021  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.555916  663785 out.go:179] * Verifying Kubernetes components...
	I1217 20:02:51.557128  663785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:51.592216  663785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:02:51.594244  663785 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:51.594276  663785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:02:51.594350  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:51.594646  663785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 20:02:51.596060  663785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 20:02:49.685304  661899 cli_runner.go:164] Run: docker network inspect kindnet-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:49.704985  661899 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:49.709915  661899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:49.721547  661899 kubeadm.go:884] updating cluster {Name:kindnet-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:49.721717  661899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:49.721782  661899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:49.758304  661899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:49.758330  661899 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:49.758385  661899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:49.790201  661899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:49.790227  661899 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:49.790237  661899 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.3 crio true true} ...
	I1217 20:02:49.790343  661899 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-601560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1217 20:02:49.790419  661899 ssh_runner.go:195] Run: crio config
	I1217 20:02:49.853038  661899 cni.go:84] Creating CNI manager for "kindnet"
	I1217 20:02:49.853093  661899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:49.853123  661899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-601560 NodeName:kindnet-601560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:49.853284  661899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-601560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:49.853356  661899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:02:49.863649  661899 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:49.863714  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:49.872398  661899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1217 20:02:49.887241  661899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:02:49.905533  661899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1217 20:02:49.922815  661899 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:49.927135  661899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:49.940862  661899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:50.053437  661899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:50.079323  661899 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560 for IP: 192.168.103.2
	I1217 20:02:50.079347  661899 certs.go:195] generating shared ca certs ...
	I1217 20:02:50.079368  661899 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.079533  661899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:50.079591  661899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:50.079604  661899 certs.go:257] generating profile certs ...
	I1217 20:02:50.079674  661899 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.key
	I1217 20:02:50.079691  661899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.crt with IP's: []
	I1217 20:02:50.127324  661899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.crt ...
	I1217 20:02:50.127359  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.crt: {Name:mked69a287e12e7b6e8886165202d8cac053de52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.127576  661899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.key ...
	I1217 20:02:50.127587  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.key: {Name:mk1208f00603053aee8fdb54d644709e1cf3fd77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.127691  661899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9
	I1217 20:02:50.127708  661899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 20:02:50.152282  661899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9 ...
	I1217 20:02:50.152317  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9: {Name:mk640a75542df3c1b914c56b6ca96b6c4b85975c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.152510  661899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9 ...
	I1217 20:02:50.152530  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9: {Name:mke3f095329d88a6c31bb2a355d65602ccdd02cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.152642  661899 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt
	I1217 20:02:50.152743  661899 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key
	I1217 20:02:50.152830  661899 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key
	I1217 20:02:50.152857  661899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt with IP's: []
	I1217 20:02:50.201241  661899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt ...
	I1217 20:02:50.201273  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt: {Name:mk4eabf7add40b088cfa86c718b2dccfa597a940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.201512  661899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key ...
	I1217 20:02:50.201539  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key: {Name:mk556df6edc58ef0c5447fd5ad71c1189aa37eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
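
Unlike the embed-certs profile above, kindnet-601560 is a first start, so the client, apiserver and proxy-client (aggregator) certificates have to be generated and signed by the shared minikubeCA, the apiserver cert carrying the service IP, loopback and node IP as SANs. A compressed sketch of that step; error handling is dropped and a throwaway self-signed CA stands in for the existing ~/.minikube/ca.{crt,key}:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; the real code loads the existing minikubeCA key pair instead.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Profile cert for "minikube" (the apiserver serving cert) with the SAN IPs
	// listed in the log: 10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
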
	I1217 20:02:50.201801  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:50.201846  661899 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:50.201857  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:50.201889  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:50.201932  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:50.201965  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:50.202020  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:50.202844  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:50.226717  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:50.251643  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:50.278437  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:50.300603  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:02:50.323610  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:02:50.356993  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:50.381194  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:02:50.405245  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:50.431988  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:50.462279  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:50.487883  661899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:50.507222  661899 ssh_runner.go:195] Run: openssl version
	I1217 20:02:50.516250  661899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.527440  661899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:50.540054  661899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.545673  661899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.545737  661899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.606471  661899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:50.616914  661899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:50.627593  661899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.642471  661899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:50.653265  661899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.658590  661899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.658659  661899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.710062  661899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:50.720146  661899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:02:50.731152  661899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.742736  661899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:50.752789  661899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.758028  661899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.758139  661899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.813579  661899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:50.823400  661899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
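
The test -L / ln -fs pairs above install the copied PEMs into the system trust store: OpenSSL looks certificates up by subject hash, so each file gets a symlink named <hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0 here). A sketch of creating one such link, with openssl doing the hashing:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkByHash creates the /etc/ssl/certs/<subject-hash>.0 symlink for a CA file,
// mirroring the `openssl x509 -hash -noout` + `ln -fs` pair above.
func linkByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // ln -fs semantics: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
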
	I1217 20:02:50.835946  661899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:50.841541  661899 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:02:50.841603  661899 kubeadm.go:401] StartCluster: {Name:kindnet-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:50.841694  661899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:50.842268  661899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:50.884186  661899 cri.go:89] found id: ""
	I1217 20:02:50.884273  661899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:50.895011  661899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:02:50.906757  661899 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:02:50.906914  661899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:02:50.916804  661899 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:02:50.916833  661899 kubeadm.go:158] found existing configuration files:
	
	I1217 20:02:50.916893  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:02:50.926277  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:02:50.926354  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:02:50.935574  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:02:50.946371  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:02:50.946442  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:02:50.960819  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:02:50.976844  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:02:50.976900  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:02:50.988053  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:02:51.001786  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:02:51.001919  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
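
The grep/rm loop above is the stale-config cleanup: any of the four kubeconfigs under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm can regenerate it (here the files simply do not exist yet, which is handled the same way). A compact Go sketch of the same rule:

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleConf removes any kubeconfig that is missing or does not point at
// the expected control-plane endpoint, matching the grep/rm loop above.
func cleanStaleConf(endpoint string, paths ...string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil || !strings.Contains(string(data), endpoint) {
			os.Remove(p) // rm -f: fine whether the file is stale or absent
			fmt.Println("removed (or already absent):", p)
		}
	}
}

func main() {
	cleanStaleConf("https://control-plane.minikube.internal:8443",
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	)
}
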
	I1217 20:02:51.016954  661899 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:02:51.118794  661899 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:02:51.234275  661899 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:02:51.597243  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 20:02:51.597435  663785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 20:02:51.597515  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:51.598990  663785 addons.go:239] Setting addon default-storageclass=true in "embed-certs-147021"
	W1217 20:02:51.599020  663785 addons.go:248] addon default-storageclass should already be in state true
	I1217 20:02:51.599049  663785 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:02:51.599561  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.633833  663785 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:51.633857  663785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:02:51.633945  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:51.638311  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:51.655387  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:51.682260  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:51.781215  663785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:51.814681  663785 node_ready.go:35] waiting up to 6m0s for node "embed-certs-147021" to be "Ready" ...
	I1217 20:02:51.828341  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 20:02:51.828414  663785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 20:02:51.831490  663785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:51.845768  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 20:02:51.845791  663785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 20:02:51.857549  663785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:51.864033  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 20:02:51.864061  663785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 20:02:51.889960  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 20:02:51.889991  663785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 20:02:51.934517  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 20:02:51.934542  663785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 20:02:51.966016  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 20:02:51.966044  663785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 20:02:51.985059  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 20:02:51.985109  663785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 20:02:52.004156  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 20:02:52.004185  663785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 20:02:52.026050  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:52.026098  663785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 20:02:52.043062  663785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:53.240482  663785 node_ready.go:49] node "embed-certs-147021" is "Ready"
	I1217 20:02:53.240536  663785 node_ready.go:38] duration metric: took 1.425810707s for node "embed-certs-147021" to be "Ready" ...
	I1217 20:02:53.240556  663785 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:02:53.240617  663785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:02:49.126206  660659 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.111378ms
	I1217 20:02:49.130757  660659 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:02:49.130890  660659 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 20:02:49.131038  660659 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:02:49.131171  660659 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:02:51.339345  660659 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.208382366s
	I1217 20:02:51.759238  660659 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.628375171s
	I1217 20:02:53.632551  660659 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501705543s
	I1217 20:02:53.653967  660659 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:02:53.669344  660659 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:02:53.684595  660659 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:02:53.684892  660659 kubeadm.go:319] [mark-control-plane] Marking the node auto-601560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:02:53.696814  660659 kubeadm.go:319] [bootstrap-token] Using token: hb8mqj.5kxeg2f4381ik7ew
	I1217 20:02:53.698478  660659 out.go:252]   - Configuring RBAC rules ...
	I1217 20:02:53.698645  660659 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:02:53.706437  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:02:53.715064  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:02:53.717973  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:02:53.721498  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:02:53.725206  660659 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:02:53.907627  663785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076101302s)
	I1217 20:02:53.907941  663785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.050353476s)
	I1217 20:02:53.908519  663785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.865401392s)
	I1217 20:02:53.908627  663785 api_server.go:72] duration metric: took 2.355227594s to wait for apiserver process to appear ...
	I1217 20:02:53.908662  663785 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:02:53.908709  663785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 20:02:53.912012  663785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-147021 addons enable metrics-server
	
	I1217 20:02:53.917501  663785 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:53.917544  663785 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:53.930572  663785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 20:02:54.040621  660659 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:02:54.467513  660659 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:02:55.039280  660659 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:02:55.040168  660659 kubeadm.go:319] 
	I1217 20:02:55.040298  660659 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:02:55.040309  660659 kubeadm.go:319] 
	I1217 20:02:55.040421  660659 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:02:55.040431  660659 kubeadm.go:319] 
	I1217 20:02:55.040473  660659 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:02:55.040579  660659 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:02:55.040678  660659 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:02:55.040689  660659 kubeadm.go:319] 
	I1217 20:02:55.040776  660659 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:02:55.040783  660659 kubeadm.go:319] 
	I1217 20:02:55.040840  660659 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:02:55.040849  660659 kubeadm.go:319] 
	I1217 20:02:55.040915  660659 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:02:55.041018  660659 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:02:55.041167  660659 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:02:55.041178  660659 kubeadm.go:319] 
	I1217 20:02:55.041301  660659 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:02:55.041442  660659 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:02:55.041458  660659 kubeadm.go:319] 
	I1217 20:02:55.041600  660659 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hb8mqj.5kxeg2f4381ik7ew \
	I1217 20:02:55.041749  660659 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:02:55.041801  660659 kubeadm.go:319] 	--control-plane 
	I1217 20:02:55.041828  660659 kubeadm.go:319] 
	I1217 20:02:55.041970  660659 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:02:55.041982  660659 kubeadm.go:319] 
	I1217 20:02:55.042143  660659 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hb8mqj.5kxeg2f4381ik7ew \
	I1217 20:02:55.042277  660659 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:02:55.044669  660659 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:02:55.044784  660659 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:02:55.044829  660659 cni.go:84] Creating CNI manager for ""
	I1217 20:02:55.044849  660659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:55.047329  660659 out.go:179] * Configuring CNI (Container Networking Interface) ...
	
	
	==> CRI-O <==
	Dec 17 20:02:28 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:28.832568303Z" level=info msg="Started container" PID=1752 containerID=b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper id=12a901bb-63c4-4700-8f2c-a58a2f23bb1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4908308aab9f665efa97273fe148688523a28e81a689c272e813270866425344
	Dec 17 20:02:28 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:28.885703016Z" level=info msg="Removing container: 0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e" id=7e49898b-9e09-46bd-8e9e-a610506dc632 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:02:28 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:28.907474546Z" level=info msg="Removed container 0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=7e49898b-9e09-46bd-8e9e-a610506dc632 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.908561756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2360bed-1f23-4a3e-880c-94b725861ca9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.938494629Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=82e5756a-f258-4d4e-b1ef-42c218919ae8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.939838651Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=90605ae6-10a5-484a-a470-533f41a2e36c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.940021465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.125914704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.126194368Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c39a6b737bb47e246f2c14558fbb5573c2aa2aaa957c99a0a355b97fd6ead6b2/merged/etc/passwd: no such file or directory"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.126246812Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c39a6b737bb47e246f2c14558fbb5573c2aa2aaa957c99a0a355b97fd6ead6b2/merged/etc/group: no such file or directory"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.126500173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.252426036Z" level=info msg="Created container b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f: kube-system/storage-provisioner/storage-provisioner" id=90605ae6-10a5-484a-a470-533f41a2e36c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.253231454Z" level=info msg="Starting container: b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f" id=24f608db-c308-4e8f-a1d5-78909cdfc4b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.255436267Z" level=info msg="Started container" PID=1766 containerID=b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f description=kube-system/storage-provisioner/storage-provisioner id=24f608db-c308-4e8f-a1d5-78909cdfc4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d55feebb0b073ce42d0a893b9de78056480bc28ea663c3f29f72ae7e3c4694
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.776728185Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=176a73a5-965a-4449-8957-d4bf4f47871d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.778692197Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ee68e35f-0f3d-4054-8402-2b41bd8af59f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.77982742Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=fdd908fa-6752-4cb2-9eb4-4d8d0f31ad49 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.779966778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.787280327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.788139047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.838823877Z" level=info msg="Created container cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=fdd908fa-6752-4cb2-9eb4-4d8d0f31ad49 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.839750581Z" level=info msg="Starting container: cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f" id=cae4bf05-6514-4d87-9a28-e57264885f43 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.842450716Z" level=info msg="Started container" PID=1802 containerID=cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper id=cae4bf05-6514-4d87-9a28-e57264885f43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4908308aab9f665efa97273fe148688523a28e81a689c272e813270866425344
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.955857561Z" level=info msg="Removing container: b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548" id=0aeef156-3fb7-4755-a1ee-285c93ac8947 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.96930769Z" level=info msg="Removed container b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=0aeef156-3fb7-4755-a1ee-285c93ac8947 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	cc3524e5a1365       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   4908308aab9f6       dashboard-metrics-scraper-6ffb444bf9-x5gq4             kubernetes-dashboard
	b5fa7a549a8d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   49d55feebb0b0       storage-provisioner                                    kube-system
	f01e59b3a5bec       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   815e22192fce1       kubernetes-dashboard-855c9754f9-7lcjb                  kubernetes-dashboard
	f495d818556a6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   885f985ad7169       busybox                                                default
	1f92b0022b9d9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   bab7472b60116       coredns-66bc5c9577-lv4jd                               kube-system
	b6958cd5a4d6c       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           51 seconds ago      Running             kube-proxy                  0                   8980982c5b9f0       kube-proxy-ztxcd                                       kube-system
	5c35d460d84a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   49d55feebb0b0       storage-provisioner                                    kube-system
	ff749e52a1c7b       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           51 seconds ago      Running             kindnet-cni                 0                   a8018d2d3cff7       kindnet-dcwlb                                          kube-system
	d83a0fe0ebf9e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           54 seconds ago      Running             kube-apiserver              0                   11954d3b28d67       kube-apiserver-default-k8s-diff-port-759234            kube-system
	13df285326623       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   b9b23e72adb41       etcd-default-k8s-diff-port-759234                      kube-system
	85ffda0bbbbe8       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           54 seconds ago      Running             kube-controller-manager     0                   5043885fb0536       kube-controller-manager-default-k8s-diff-port-759234   kube-system
	4d360a4c3fd6f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           54 seconds ago      Running             kube-scheduler              0                   77b85d1ab6cbd       kube-scheduler-default-k8s-diff-port-759234            kube-system
	
	
	==> coredns [1f92b0022b9d9a916df843f4334eb7bbb4b21ace14628e070640e5df15619f23] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53007 - 44118 "HINFO IN 2440421156102950590.91312000856898436. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.022480456s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-759234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-759234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=default-k8s-diff-port-759234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-759234
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:02:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-759234
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                db8290dd-36ef-4726-9d3e-6ea726055ffb
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-lv4jd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-759234                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-dcwlb                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-759234             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-759234    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-ztxcd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-759234             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x5gq4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7lcjb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node default-k8s-diff-port-759234 event: Registered Node default-k8s-diff-port-759234 in Controller
	  Normal  NodeReady                94s                kubelet          Node default-k8s-diff-port-759234 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-759234 event: Registered Node default-k8s-diff-port-759234 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [13df2853266238c53f3daab51af6a83329ec267b44072f537e38af71a0078c3f] <==
	{"level":"warn","ts":"2025-12-17T20:02:04.573056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.587324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.594995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.603974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.612533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.619973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.626749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.633968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.641943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.650762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.660659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.666350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.673983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.697352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.705007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.712132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.760173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55250","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:02:36.232680Z","caller":"traceutil/trace.go:172","msg":"trace[173342055] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"115.109336ms","start":"2025-12-17T20:02:36.117551Z","end":"2025-12-17T20:02:36.232660Z","steps":["trace[173342055] 'process raft request'  (duration: 115.066663ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:02:36.232685Z","caller":"traceutil/trace.go:172","msg":"trace[1087504951] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"168.637004ms","start":"2025-12-17T20:02:36.064025Z","end":"2025-12-17T20:02:36.232662Z","steps":["trace[1087504951] 'process raft request'  (duration: 119.697294ms)","trace[1087504951] 'compare'  (duration: 48.790717ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:02:37.255610Z","caller":"traceutil/trace.go:172","msg":"trace[1589292807] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"340.306273ms","start":"2025-12-17T20:02:36.915285Z","end":"2025-12-17T20:02:37.255591Z","steps":["trace[1589292807] 'process raft request'  (duration: 324.696059ms)","trace[1589292807] 'compare'  (duration: 15.265287ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:02:37.256028Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:02:36.915271Z","time spent":"340.402373ms","remote":"127.0.0.1:54454","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4620,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:479 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4566 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2025-12-17T20:02:37.514966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.204146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-lv4jd\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-12-17T20:02:37.515131Z","caller":"traceutil/trace.go:172","msg":"trace[1645125940] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-lv4jd; range_end:; response_count:1; response_revision:621; }","duration":"198.34266ms","start":"2025-12-17T20:02:37.316729Z","end":"2025-12-17T20:02:37.515072Z","steps":["trace[1645125940] 'agreement among raft nodes before linearized reading'  (duration: 63.542438ms)","trace[1645125940] 'range keys from in-memory index tree'  (duration: 134.544753ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:02:37.515217Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.658384ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766902728148880 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1882193695ac5561\" mod_revision:521 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1882193695ac5561\" value_size:689 lease:6571766902728148355 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1882193695ac5561\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:02:37.515296Z","caller":"traceutil/trace.go:172","msg":"trace[459163471] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"258.044415ms","start":"2025-12-17T20:02:37.257240Z","end":"2025-12-17T20:02:37.515284Z","steps":["trace[459163471] 'process raft request'  (duration: 123.074281ms)","trace[459163471] 'compare'  (duration: 134.480392ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:02:57 up  1:45,  0 user,  load average: 5.49, 3.88, 2.64
	Linux default-k8s-diff-port-759234 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ff749e52a1c7b238ec2a3b689c2471463861c44182ba71da511bc1f90ba22d68] <==
	I1217 20:02:06.369690       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:06.369959       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 20:02:06.370179       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:06.370207       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:06.370233       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:06.573339       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:06.573469       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:06.573486       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:06.573715       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:07.065752       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:07.065806       1 metrics.go:72] Registering metrics
	I1217 20:02:07.065916       1 controller.go:711] "Syncing nftables rules"
	I1217 20:02:16.573274       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:16.573344       1 main.go:301] handling current node
	I1217 20:02:26.581201       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:26.581241       1 main.go:301] handling current node
	I1217 20:02:36.573378       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:36.573470       1 main.go:301] handling current node
	I1217 20:02:46.577655       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:46.577692       1 main.go:301] handling current node
	I1217 20:02:56.577719       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:56.577903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d83a0fe0ebf9e431abfef83125000274ec881515d8b2fe37492a61682b8b7a56] <==
	I1217 20:02:05.328557       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:02:05.328576       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 20:02:05.328599       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:02:05.328661       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:02:05.328669       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:02:05.328674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:02:05.328680       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:02:05.328996       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:02:05.329052       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:02:05.329058       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:02:05.337277       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:02:05.368274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:02:05.376201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:02:05.586336       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:02:05.616692       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:02:05.635369       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:02:05.643330       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:02:05.652264       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:05.686811       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.125.218"}
	I1217 20:02:05.696675       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.16.150"}
	I1217 20:02:06.232056       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:02:08.711987       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:08.907965       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:02:09.258172       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [85ffda0bbbbe80bde1d1c7403094674a0f0d609d5aa8572f8c470fd845327c85] <==
	I1217 20:02:08.629495       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 20:02:08.643842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:02:08.646989       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:02:08.649343       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:02:08.649358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:02:08.650576       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:02:08.650600       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:02:08.652928       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 20:02:08.653849       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 20:02:08.653874       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:02:08.655047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:02:08.655096       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 20:02:08.655142       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:02:08.655146       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:02:08.655170       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:02:08.655176       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:02:08.655187       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:02:08.660652       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:02:08.660695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:02:08.660747       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:02:08.660808       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:02:08.660818       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:02:08.660826       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:02:08.670891       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 20:02:08.673202       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b6958cd5a4d6c327cfb1850926f770862f2ba4f2b196595b819413ce72236040] <==
	I1217 20:02:06.157773       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:06.246478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:02:06.347111       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:02:06.347158       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 20:02:06.347285       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:06.370538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:06.370645       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:02:06.377302       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:06.377753       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:02:06.377790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:06.380476       1 config.go:200] "Starting service config controller"
	I1217 20:02:06.380521       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:06.380543       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:06.380548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:06.380561       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:06.380566       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:06.380623       1 config.go:309] "Starting node config controller"
	I1217 20:02:06.380639       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:06.380654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:06.480636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 20:02:06.480666       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:06.480638       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4d360a4c3fd6f7b37c23d2fae6316c0a6398e536b4ed3c70d59262bc9cbab9c7] <==
	I1217 20:02:04.031926       1 serving.go:386] Generated self-signed cert in-memory
	I1217 20:02:05.340623       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:02:05.340660       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:05.346530       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 20:02:05.346545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:05.346581       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 20:02:05.346586       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:05.346548       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:05.346650       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:05.347003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:02:05.347229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:02:05.446829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:05.446865       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:05.446955       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 17 20:02:10 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:10.776156     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 20:02:11 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:11.819888     721 scope.go:117] "RemoveContainer" containerID="955ea5817f1e8123df22b35943e083afd2bd7df677501593ab64e9e943f06bc1"
	Dec 17 20:02:12 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:12.825147     721 scope.go:117] "RemoveContainer" containerID="955ea5817f1e8123df22b35943e083afd2bd7df677501593ab64e9e943f06bc1"
	Dec 17 20:02:12 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:12.825722     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:12 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:12.826025     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:13 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:13.830713     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:13 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:13.831390     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:15 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:15.887124     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7lcjb" podStartSLOduration=1.200522431 podStartE2EDuration="6.887098261s" podCreationTimestamp="2025-12-17 20:02:09 +0000 UTC" firstStartedPulling="2025-12-17 20:02:09.503837796 +0000 UTC m=+6.822134741" lastFinishedPulling="2025-12-17 20:02:15.19041362 +0000 UTC m=+12.508710571" observedRunningTime="2025-12-17 20:02:15.886739508 +0000 UTC m=+13.205036468" watchObservedRunningTime="2025-12-17 20:02:15.887098261 +0000 UTC m=+13.205395222"
	Dec 17 20:02:16 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:16.056401     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:16 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:16.056622     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:28.773833     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:28.882842     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:28.883183     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:28.883915     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:36 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:36.056541     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:36 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:36.056839     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:36 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:36.908121     721 scope.go:117] "RemoveContainer" containerID="5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:50.775855     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:50.953779     721 scope.go:117] "RemoveContainer" containerID="cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:50.953944     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:50.953957     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: kubelet.service: Consumed 1.869s CPU time.
	
	
	==> kubernetes-dashboard [f01e59b3a5bec96adc422b58a3f2d145f5ded1ce16afc6fa1bdf3418adf64dc8] <==
	2025/12/17 20:02:15 Using namespace: kubernetes-dashboard
	2025/12/17 20:02:15 Using in-cluster config to connect to apiserver
	2025/12/17 20:02:15 Using secret token for csrf signing
	2025/12/17 20:02:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:02:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:02:15 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 20:02:15 Generating JWE encryption key
	2025/12/17 20:02:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:02:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:02:15 Initializing JWE encryption key from synchronized object
	2025/12/17 20:02:15 Creating in-cluster Sidecar client
	2025/12/17 20:02:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:02:15 Serving insecurely on HTTP port: 9090
	2025/12/17 20:02:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:02:15 Starting overwatch
	
	
	==> storage-provisioner [5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a] <==
	I1217 20:02:06.129244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:02:36.132634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f] <==
	I1217 20:02:37.560402       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:02:37.568552       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:02:37.568612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:02:37.570983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:41.025793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:45.287017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:48.885683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:51.939308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:54.961793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:54.966984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:02:54.967160       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:02:54.967275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef3dee07-d1ce-418e-a6ba-4a2d4546a253", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-759234_53320cc2-2157-4e67-a487-aa131a78f9f7 became leader
	I1217 20:02:54.967336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759234_53320cc2-2157-4e67-a487-aa131a78f9f7!
	W1217 20:02:54.971358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:54.975624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:02:55.067754       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759234_53320cc2-2157-4e67-a487-aa131a78f9f7!
	W1217 20:02:56.978356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:56.982550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
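The kubelet section above shows dashboard-metrics-scraper-6ffb444bf9-x5gq4 cycling through CrashLoopBackOff (back-off 10s, then 20s, then 40s) before kubelet.service is stopped at 20:02:54, and the first storage-provisioner instance dying on an i/o timeout to 10.96.0.1:443. If the crash loop needs a closer look, the previous attempt's output and the back-off events can be pulled with standard kubectl calls; a minimal sketch, assuming the context and pod name from the log are still valid and the pod has not been recreated since the capture:

	# Output of the last failed dashboard-metrics-scraper attempt
	kubectl --context default-k8s-diff-port-759234 -n kubernetes-dashboard \
	  logs dashboard-metrics-scraper-6ffb444bf9-x5gq4 --previous
	# Restart count and BackOff events for the same pod
	kubectl --context default-k8s-diff-port-759234 -n kubernetes-dashboard \
	  describe pod dashboard-metrics-scraper-6ffb444bf9-x5gq4
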
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234: exit status 2 (476.938783ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
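Both checks in this post-mortem read a single field from minikube's Go-template status output: --format={{.APIServer}} here and --format={{.Host}} further down. The same template mechanism accepts several fields in one call; a minimal sketch, assuming the profile still exists:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-759234 \
	  --format='host:{{.Host}} apiserver:{{.APIServer}}'
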
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
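The field-selector query above lists every pod, in any namespace, whose phase is not Running; if it returns names, the same selector can be rerun with wide output to see node placement and pod IPs. A sketch using only standard kubectl flags:

	kubectl --context default-k8s-diff-port-759234 get po -A \
	  --field-selector=status.phase!=Running -o wide
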
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect default-k8s-diff-port-759234
helpers_test.go:244: (dbg) docker inspect default-k8s-diff-port-759234:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8",
	        "Created": "2025-12-17T20:00:47.282778313Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 649426,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:01:56.144994012Z",
	            "FinishedAt": "2025-12-17T20:01:54.925996647Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/hostname",
	        "HostsPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/hosts",
	        "LogPath": "/var/lib/docker/containers/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8/fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8-json.log",
	        "Name": "/default-k8s-diff-port-759234",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-759234:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-759234",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "fb8483ff8e2a14d378d4db3e15e7b37fbb77525e29d99d5e1de222fe462790b8",
	                "LowerDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7843654506f5a98613c1255e49abf23e4cc9d5b1f941075f03bad1d85596baa7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-759234",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-759234/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-759234",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-759234",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-759234",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6531244d21d300d973005bf3fd3904c3b327673ec76c465093a7f6a16906e5ff",
	            "SandboxKey": "/var/run/docker/netns/6531244d21d3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-759234": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "034e5df717c044fefebfa38f3b7a5265a61b576bc983becdb12880ee6b18c027",
	                    "EndpointID": "c984acaeecfb9ebe7f9636503957058d8178a99e1a6f6842245d79e2728d0547",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:83:5b:c0:15:52",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-759234",
	                        "fb8483ff8e2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
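Most of the inspect dump above is static container configuration; the two pieces that matter for a Pause post-mortem, the container state and the host port published for the 8444/tcp API-server port, can be pulled individually with docker's own Go-template formatting. A minimal sketch, assuming the container still exists:

	# Container state ("running" in the dump above)
	docker inspect -f '{{.State.Status}}' default-k8s-diff-port-759234
	# Host port mapped to the API server's 8444/tcp (33471 in the dump above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-759234
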
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234: exit status 2 (433.535209ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759234 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-759234 logs -n 25: (1.821110095s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────
────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────
────────────┤
	│ delete  │ -p old-k8s-version-894575                                                                                                                                                                                                                          │ old-k8s-version-894575       │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-759234 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                 │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-759234 --alsologtostderr -v=3                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-420762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-759234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                            │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:01 UTC │
	│ start   │ -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ stop    │ -p newest-cni-420762 --alsologtostderr -v=3                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:01 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable dashboard -p newest-cni-420762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1 │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ addons  │ enable metrics-server -p embed-certs-147021 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                                  │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ start   │ -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ stop    │ -p embed-certs-147021 --alsologtostderr -v=3                                                                                                                                                                                                       │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ image   │ newest-cni-420762 image list --format=json                                                                                                                                                                                                         │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ pause   │ -p newest-cni-420762 --alsologtostderr -v=1                                                                                                                                                                                                        │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-322567                                                                                                                                                                                                                       │ kubernetes-upgrade-322567    │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p auto-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                            │ auto-601560                  │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ delete  │ -p newest-cni-420762                                                                                                                                                                                                                               │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ delete  │ -p newest-cni-420762                                                                                                                                                                                                                               │ newest-cni-420762            │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p kindnet-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                           │ kindnet-601560               │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-147021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ start   │ -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3                                                                                             │ embed-certs-147021           │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	│ image   │ default-k8s-diff-port-759234 image list --format=json                                                                                                                                                                                              │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │ 17 Dec 25 20:02 UTC │
	│ pause   │ -p default-k8s-diff-port-759234 --alsologtostderr -v=1                                                                                                                                                                                             │ default-k8s-diff-port-759234 │ jenkins │ v1.37.0 │ 17 Dec 25 20:02 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────
────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:02:43
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:02:43.597307  663785 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:02:43.597462  663785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:43.597478  663785 out.go:374] Setting ErrFile to fd 2...
	I1217 20:02:43.597495  663785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:02:43.597723  663785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:02:43.598258  663785 out.go:368] Setting JSON to false
	I1217 20:02:43.599444  663785 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6315,"bootTime":1765995449,"procs":304,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:02:43.599537  663785 start.go:143] virtualization: kvm guest
	I1217 20:02:43.601655  663785 out.go:179] * [embed-certs-147021] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:02:43.603367  663785 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:02:43.603423  663785 notify.go:221] Checking for updates...
	I1217 20:02:43.606402  663785 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:02:43.608803  663785 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:43.612274  663785 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:02:43.614281  663785 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:02:43.615778  663785 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:02:43.618527  663785 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:43.619342  663785 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:02:43.650357  663785 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:02:43.650566  663785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:43.720232  663785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:90 SystemTime:2025-12-17 20:02:43.70903939 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:43.720389  663785 docker.go:319] overlay module found
	I1217 20:02:43.723268  663785 out.go:179] * Using the docker driver based on existing profile
	I1217 20:02:43.724535  663785 start.go:309] selected driver: docker
	I1217 20:02:43.724557  663785 start.go:927] validating driver "docker" against &{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:43.724681  663785 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:02:43.725432  663785 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:02:43.805834  663785 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:80 SystemTime:2025-12-17 20:02:43.785586246 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:02:43.806588  663785 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:02:43.806640  663785 cni.go:84] Creating CNI manager for ""
	I1217 20:02:43.806723  663785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:43.806829  663785 start.go:353] cluster config:
	{Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:43.809493  663785 out.go:179] * Starting "embed-certs-147021" primary control-plane node in "embed-certs-147021" cluster
	I1217 20:02:43.811052  663785 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:02:43.812488  663785 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:02:43.480042  660659 cli_runner.go:164] Run: docker network inspect auto-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:43.501548  660659 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:43.506583  660659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:43.519861  660659 kubeadm.go:884] updating cluster {Name:auto-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMne
tClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:43.519978  660659 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:43.520021  660659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:43.558992  660659 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:43.559021  660659 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:43.559072  660659 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:43.588654  660659 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:43.588677  660659 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:43.588687  660659 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.3 crio true true} ...
	I1217 20:02:43.588803  660659 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-601560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:auto-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 20:02:43.588882  660659 ssh_runner.go:195] Run: crio config
	I1217 20:02:43.653811  660659 cni.go:84] Creating CNI manager for ""
	I1217 20:02:43.653835  660659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:43.653858  660659 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:43.653913  660659 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-601560 NodeName:auto-601560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:43.654128  660659 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-601560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:43.654205  660659 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:02:43.664467  660659 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:43.664547  660659 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:43.675783  660659 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1217 20:02:43.694191  660659 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:02:43.714151  660659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
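
The block above shows the kubeadm config that minikube renders from its kubeadm options and then copies to /var/tmp/minikube/kubeadm.yaml.new on the node. As a rough illustration of the templating step only (this is not minikube's actual template or types; the nodeConfig struct and field set are hypothetical), a minimal Go sketch:

    package main

    import (
    	"os"
    	"text/template"
    )

    // nodeConfig is a hypothetical, trimmed-down stand-in for the options
    // that appear in the "kubeadm options:" log line above.
    type nodeConfig struct {
    	NodeName  string
    	NodeIP    string
    	BindPort  int
    	CRISocket string
    }

    // initConfigTmpl renders only the InitConfiguration document; the real
    // file also contains ClusterConfiguration, KubeletConfiguration and
    // KubeProxyConfiguration sections, as printed above.
    const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.NodeIP}}"
      taints: []
    `

    func main() {
    	cfg := nodeConfig{
    		NodeName:  "auto-601560",
    		NodeIP:    "192.168.76.2",
    		BindPort:  8443,
    		CRISocket: "unix:///var/run/crio/crio.sock",
    	}
    	t := template.Must(template.New("init").Parse(initConfigTmpl))
    	if err := t.Execute(os.Stdout, cfg); err != nil {
    		panic(err)
    	}
    }
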
	I1217 20:02:43.730644  660659 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:43.734822  660659 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
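
The two commands above first check whether /etc/hosts already pins control-plane.minikube.internal to the node IP and, if not, rewrite the file by filtering out any stale entry and appending a fresh one. A minimal Go sketch of the same idempotent update, run against a scratch file (the updateHosts helper is illustrative, not minikube's code, which performs this remotely over SSH):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // updateHosts drops any line ending in "<TAB>hostname" and appends a fresh
    // "ip<TAB>hostname" entry, mirroring the grep -v / echo pipeline above.
    func updateHosts(path, ip, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+hostname) {
    			continue // stale mapping, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, fmt.Sprintf("%s\t%s", ip, hostname))
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Work on a scratch copy; the real target in the log is /etc/hosts.
    	_ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0o644)
    	if err := updateHosts("hosts.test", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
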
	I1217 20:02:43.749458  660659 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:43.813773  663785 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:43.813813  663785 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:02:43.813827  663785 cache.go:65] Caching tarball of preloaded images
	I1217 20:02:43.813887  663785 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:02:43.813957  663785 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:02:43.813974  663785 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:02:43.814282  663785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:02:43.841065  663785 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:02:43.841096  663785 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:02:43.841119  663785 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:02:43.841160  663785 start.go:360] acquireMachinesLock for embed-certs-147021: {Name:mkc6328ab9d874d1f1fffe133279d94e48b1c6e9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:02:43.841261  663785 start.go:364] duration metric: took 51.764µs to acquireMachinesLock for "embed-certs-147021"
	I1217 20:02:43.841294  663785 start.go:96] Skipping create...Using existing machine configuration
	I1217 20:02:43.841305  663785 fix.go:54] fixHost starting: 
	I1217 20:02:43.841582  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:43.861935  663785 fix.go:112] recreateIfNeeded on embed-certs-147021: state=Stopped err=<nil>
	W1217 20:02:43.861983  663785 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 20:02:43.869002  660659 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:43.890450  660659 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560 for IP: 192.168.76.2
	I1217 20:02:43.890479  660659 certs.go:195] generating shared ca certs ...
	I1217 20:02:43.890502  660659 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:43.890697  660659 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:43.890770  660659 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:43.890780  660659 certs.go:257] generating profile certs ...
	I1217 20:02:43.890856  660659 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.key
	I1217 20:02:43.890879  660659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.crt with IP's: []
	I1217 20:02:43.964742  660659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.crt ...
	I1217 20:02:43.964770  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.crt: {Name:mk20ec0393b60e0059a93fa0ea47f7b86671a83c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:43.964940  660659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.key ...
	I1217 20:02:43.964951  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/client.key: {Name:mke6bb134cd14678ed704bb54f28dec2d4076df0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:43.965035  660659 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4
	I1217 20:02:43.965049  660659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 20:02:44.032122  660659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4 ...
	I1217 20:02:44.032198  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4: {Name:mk8ed0294878a6563e8553297c9374261df588a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.032385  660659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4 ...
	I1217 20:02:44.032404  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4: {Name:mke5fdbcc4fb97cd69180fb9179af9750210c230 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.032514  660659 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt.578310f4 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt
	I1217 20:02:44.032620  660659 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key.578310f4 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key
	I1217 20:02:44.032703  660659 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key
	I1217 20:02:44.032725  660659 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt with IP's: []
	I1217 20:02:44.117499  660659 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt ...
	I1217 20:02:44.117536  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt: {Name:mk08364f679f8c12e34b4a6a41dea1c7facafcd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:44.117729  660659 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key ...
	I1217 20:02:44.117748  660659 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key: {Name:mk11c638c34b9bf51fbf913e4bda9172b3eef8d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
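
The profile certificates above (client, apiserver, aggregator proxy-client) are generated locally and later copied into /var/lib/minikube/certs on the node. The apiserver cert is the interesting one: it carries the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.76.2. A compact, standard-library Go sketch of issuing a cert with IP SANs from a CA (toy in-process keys here, not minikube's cached minikubeCA or its certs.go):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Toy CA generated on the fly; minikube reuses its existing CA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Leaf certificate with the IP SANs seen in the log above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
    		},
    	}
    	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }
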
	I1217 20:02:44.117976  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:44.118025  660659 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:44.118039  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:44.118092  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:44.118135  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:44.118170  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:44.118227  660659 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:44.118891  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:44.140817  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:44.161494  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:44.182356  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:44.202058  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1217 20:02:44.220529  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:02:44.239782  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:44.258573  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/auto-601560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:02:44.283042  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:44.302241  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:44.320437  660659 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:44.340559  660659 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:44.358336  660659 ssh_runner.go:195] Run: openssl version
	I1217 20:02:44.365258  660659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.373190  660659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:44.381397  660659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.385961  660659 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.386026  660659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:44.439602  660659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:44.450064  660659 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:02:44.458680  660659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.467017  660659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:44.475609  660659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.479786  660659 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.479841  660659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:44.517019  660659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:44.525270  660659 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:44.533566  660659 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.541505  660659 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:44.549366  660659 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.553481  660659 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.553546  660659 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:44.587542  660659 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:44.596369  660659 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
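
Each CA bundle step above follows the same pattern: copy the PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash (for example b5213941 for minikubeCA.pem), and symlink /etc/ssl/certs/<hash>.0 to it so OpenSSL-based clients can find the certificate. A hedged Go sketch of that pattern driven through the openssl CLI (the installCACert helper is illustrative and needs root for the /etc/ssl/certs write):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert links /etc/ssl/certs/<subject-hash>.0 at the given PEM,
    // mirroring the "openssl x509 -hash -noout" + "ln -fs" steps in the log.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
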
	I1217 20:02:44.604210  660659 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:44.608366  660659 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:02:44.608435  660659 kubeadm.go:401] StartCluster: {Name:auto-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:auto-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:44.608522  660659 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:44.608576  660659 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:44.637881  660659 cri.go:89] found id: ""
	I1217 20:02:44.637961  660659 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:44.646164  660659 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:02:44.654385  660659 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:02:44.654446  660659 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:02:44.663421  660659 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:02:44.663438  660659 kubeadm.go:158] found existing configuration files:
	
	I1217 20:02:44.663488  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:02:44.671413  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:02:44.671483  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:02:44.679480  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:02:44.687430  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:02:44.687495  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:02:44.695511  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:02:44.704982  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:02:44.705053  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:02:44.713472  660659 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:02:44.722499  660659 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:02:44.722567  660659 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
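
The block above is the stale-config cleanup: for each kubeconfig under /etc/kubernetes the runner greps for the expected https://control-plane.minikube.internal:8443 endpoint and removes the file when the endpoint is absent (here every grep exits 2 simply because this is a first start and none of the files exist yet). A minimal Go sketch of that check-then-remove loop (function and variable names are illustrative, not minikube's kubeadm.go):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	configs := []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	}
    	for _, path := range configs {
    		data, err := os.ReadFile(path)
    		if err == nil && strings.Contains(string(data), endpoint) {
    			continue // config already points at the expected endpoint
    		}
    		// Missing file or wrong endpoint: remove it so kubeadm regenerates it.
    		if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    			fmt.Fprintln(os.Stderr, "removing", path, ":", err)
    		}
    	}
    }
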
	I1217 20:02:44.731266  660659 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:02:44.769238  660659 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:02:44.769313  660659 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:02:44.789783  660659 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:02:44.789905  660659 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:02:44.789954  660659 kubeadm.go:319] OS: Linux
	I1217 20:02:44.790015  660659 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:02:44.790073  660659 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:02:44.790159  660659 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:02:44.790224  660659 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:02:44.790283  660659 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:02:44.790351  660659 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:02:44.790414  660659 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:02:44.790468  660659 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:02:44.853314  660659 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:02:44.853449  660659 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:02:44.853610  660659 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:02:44.862326  660659 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:02:43.611400  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Running}}
	I1217 20:02:43.634670  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:02:43.658419  661899 cli_runner.go:164] Run: docker exec kindnet-601560 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:02:43.722281  661899 oci.go:144] the created container "kindnet-601560" has a running status.
	I1217 20:02:43.722316  661899 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa...
	I1217 20:02:43.760254  661899 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:02:43.796812  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:02:43.822805  661899 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:02:43.822836  661899 kic_runner.go:114] Args: [docker exec --privileged kindnet-601560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:02:43.872271  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:02:43.897400  661899 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:43.897498  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:43.928589  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:43.929635  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:43.929667  661899 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:43.930479  661899 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53518->127.0.0.1:33483: read: connection reset by peer
	I1217 20:02:47.080384  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-601560
	
	I1217 20:02:47.080417  661899 ubuntu.go:182] provisioning hostname "kindnet-601560"
	I1217 20:02:47.080481  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.099092  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.099335  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:47.099348  661899 main.go:143] libmachine: About to run SSH command:
	sudo hostname kindnet-601560 && echo "kindnet-601560" | sudo tee /etc/hostname
	I1217 20:02:47.258395  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: kindnet-601560
	
	I1217 20:02:47.258499  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.278943  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.279227  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:47.279253  661899 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-601560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-601560/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-601560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:47.428844  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:47.428891  661899 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:47.428936  661899 ubuntu.go:190] setting up certificates
	I1217 20:02:47.428955  661899 provision.go:84] configureAuth start
	I1217 20:02:47.429032  661899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-601560
	I1217 20:02:47.451573  661899 provision.go:143] copyHostCerts
	I1217 20:02:47.451638  661899 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:47.451651  661899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:47.451721  661899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:47.451831  661899 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:47.451841  661899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:47.451872  661899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:47.451939  661899 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:47.451948  661899 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:47.451971  661899 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:47.452026  661899 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.kindnet-601560 san=[127.0.0.1 192.168.103.2 kindnet-601560 localhost minikube]
	I1217 20:02:47.563326  661899 provision.go:177] copyRemoteCerts
	I1217 20:02:47.563393  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:47.563451  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.582181  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:47.684908  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:47.709099  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I1217 20:02:47.731457  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 20:02:47.750120  661899 provision.go:87] duration metric: took 321.144631ms to configureAuth
	I1217 20:02:47.750151  661899 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:47.750367  661899 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:47.750489  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:47.770762  661899 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.770982  661899 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1217 20:02:47.770998  661899 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:48.061288  661899 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:48.061319  661899 machine.go:97] duration metric: took 4.163895838s to provisionDockerMachine
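
Provisioning finishes by writing CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube over SSH and restarting CRI-O, so the service CIDR is treated as an insecure registry range. A rough Go sketch of writing that drop-in and bouncing the service on the node itself (run as root; paths mirror the log, but this is a sketch, not minikube's provisioner):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	const dropIn = "/etc/sysconfig/crio.minikube"
    	content := "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"

    	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if err := os.WriteFile(dropIn, []byte(content), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Equivalent of the "sudo systemctl restart crio" in the SSH command above.
    	if out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput(); err != nil {
    		fmt.Fprintf(os.Stderr, "restart crio: %v\n%s", err, out)
    		os.Exit(1)
    	}
    }
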
	I1217 20:02:48.061333  661899 client.go:176] duration metric: took 9.328604528s to LocalClient.Create
	I1217 20:02:48.061362  661899 start.go:167] duration metric: took 9.328675971s to libmachine.API.Create "kindnet-601560"
	I1217 20:02:48.061378  661899 start.go:293] postStartSetup for "kindnet-601560" (driver="docker")
	I1217 20:02:48.061394  661899 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:48.061469  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:48.061525  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.082640  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:48.188417  661899 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:48.192250  661899 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:48.192284  661899 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:48.192299  661899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:48.192352  661899 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:48.192420  661899 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:48.192509  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:48.201128  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:48.223483  661899 start.go:296] duration metric: took 162.083422ms for postStartSetup
	I1217 20:02:48.223865  661899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-601560
	I1217 20:02:48.245051  661899 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/config.json ...
	I1217 20:02:48.245373  661899 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:48.245420  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.265186  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:48.366213  661899 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:48.371154  661899 start.go:128] duration metric: took 9.641617458s to createHost
	I1217 20:02:48.371184  661899 start.go:83] releasing machines lock for "kindnet-601560", held for 9.64176076s
	I1217 20:02:48.371269  661899 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kindnet-601560
	I1217 20:02:48.391475  661899 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:48.391546  661899 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:48.391557  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.391636  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:02:48.413608  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:48.413793  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:02:44.864701  660659 out.go:252]   - Generating certificates and keys ...
	I1217 20:02:44.864802  660659 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:02:44.864879  660659 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:02:45.136018  660659 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:02:45.319141  660659 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:02:45.467768  660659 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:02:46.336397  660659 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:02:46.414848  660659 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:02:46.415070  660659 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-601560 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 20:02:46.645327  660659 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:02:46.645505  660659 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-601560 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 20:02:46.972638  660659 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:02:47.223477  660659 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:02:47.375383  660659 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:02:47.375483  660659 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:02:47.823305  660659 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:02:47.930913  660659 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:02:48.096526  660659 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:02:48.275879  660659 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:02:48.494413  660659 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:02:48.494978  660659 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:02:48.500012  660659 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 20:02:43.864599  663785 out.go:252] * Restarting existing docker container for "embed-certs-147021" ...
	I1217 20:02:43.864710  663785 cli_runner.go:164] Run: docker start embed-certs-147021
	I1217 20:02:44.132669  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:44.154240  663785 kic.go:430] container "embed-certs-147021" state is running.
	I1217 20:02:44.154805  663785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:02:44.178115  663785 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/config.json ...
	I1217 20:02:44.178408  663785 machine.go:94] provisionDockerMachine start ...
	I1217 20:02:44.178513  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:44.198136  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:44.198394  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:44.198407  663785 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:02:44.198898  663785 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48296->127.0.0.1:33488: read: connection reset by peer
	I1217 20:02:47.348304  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-147021
	
	I1217 20:02:47.348337  663785 ubuntu.go:182] provisioning hostname "embed-certs-147021"
	I1217 20:02:47.348419  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:47.366963  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.367192  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:47.367209  663785 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-147021 && echo "embed-certs-147021" | sudo tee /etc/hostname
	I1217 20:02:47.527178  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-147021
	
	I1217 20:02:47.527279  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:47.547145  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:47.547394  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:47.547420  663785 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-147021' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-147021/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-147021' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:02:47.694326  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:02:47.694359  663785 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:02:47.694415  663785 ubuntu.go:190] setting up certificates
	I1217 20:02:47.694429  663785 provision.go:84] configureAuth start
	I1217 20:02:47.694487  663785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:02:47.718735  663785 provision.go:143] copyHostCerts
	I1217 20:02:47.718817  663785 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:02:47.718840  663785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:02:47.718908  663785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:02:47.719038  663785 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:02:47.719049  663785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:02:47.719109  663785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:02:47.719218  663785 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:02:47.719229  663785 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:02:47.719256  663785 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:02:47.719335  663785 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.embed-certs-147021 san=[127.0.0.1 192.168.85.2 embed-certs-147021 localhost minikube]
	I1217 20:02:47.856517  663785 provision.go:177] copyRemoteCerts
	I1217 20:02:47.856586  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:02:47.856629  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:47.877532  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:47.982215  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:02:48.001798  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1217 20:02:48.021223  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:02:48.040641  663785 provision.go:87] duration metric: took 346.194733ms to configureAuth
	I1217 20:02:48.040674  663785 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:02:48.040880  663785 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:48.041029  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.060669  663785 main.go:143] libmachine: Using SSH client type: native
	I1217 20:02:48.061027  663785 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1217 20:02:48.061056  663785 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:02:48.431341  663785 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:02:48.431371  663785 machine.go:97] duration metric: took 4.252940597s to provisionDockerMachine
	I1217 20:02:48.431386  663785 start.go:293] postStartSetup for "embed-certs-147021" (driver="docker")
	I1217 20:02:48.431400  663785 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:02:48.431476  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:02:48.431534  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.456476  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.564101  663785 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:02:48.567966  663785 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:02:48.567999  663785 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:02:48.568014  663785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:02:48.568092  663785 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:02:48.568209  663785 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:02:48.568362  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:02:48.576561  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:48.595217  663785 start.go:296] duration metric: took 163.814903ms for postStartSetup
	I1217 20:02:48.595292  663785 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:02:48.595339  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.501502  660659 out.go:252]   - Booting up control plane ...
	I1217 20:02:48.501656  660659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:02:48.501776  660659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:02:48.502348  660659 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:02:48.516278  660659 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:02:48.516435  660659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:02:48.523781  660659 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:02:48.523985  660659 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:02:48.524057  660659 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:02:48.624366  660659 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:02:48.624548  660659 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:02:48.579398  661899 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:48.586164  661899 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:48.626011  661899 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:48.631109  661899 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:48.631187  661899 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:48.661358  661899 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:02:48.661381  661899 start.go:496] detecting cgroup driver to use...
	I1217 20:02:48.661414  661899 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:48.661466  661899 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:48.679375  661899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:48.692070  661899 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:48.692146  661899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:48.708630  661899 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:48.729969  661899 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:48.829550  661899 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:48.924106  661899 docker.go:234] disabling docker service ...
	I1217 20:02:48.924201  661899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:48.947385  661899 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:48.961958  661899 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:49.066770  661899 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:49.161061  661899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
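
Because the kicbase image ships both Docker and CRI-O, the runner first stops, disables and masks cri-docker and docker (the systemctl sequence above) so that only CRI-O owns the container runtime on this node. A compact Go sketch of that stop/disable/mask loop (unit list and helper name are illustrative; failures are tolerated, just as the log does for units that are not installed):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // silenceUnit stops, disables and masks a systemd unit, ignoring errors.
    func silenceUnit(unit string) {
    	for _, args := range [][]string{
    		{"systemctl", "stop", "-f", unit},
    		{"systemctl", "disable", unit},
    		{"systemctl", "mask", unit},
    	} {
    		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
    			fmt.Printf("%v (ignored): %v: %s\n", args, err, out)
    		}
    	}
    }

    func main() {
    	for _, u := range []string{"cri-docker.socket", "cri-docker.service", "docker.socket", "docker.service"} {
    		silenceUnit(u)
    	}
    }
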
	I1217 20:02:49.174468  661899 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:49.189683  661899 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:49.189752  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.208286  661899 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:49.208443  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.218617  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.228519  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.238280  661899 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:49.247419  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.257212  661899 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.271455  661899 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.280614  661899 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:49.288252  661899 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:49.295603  661899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:49.382931  661899 ssh_runner.go:195] Run: sudo systemctl restart crio
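
The sed calls above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image to registry.k8s.io/pause:3.10.1, force cgroup_manager = "systemd", re-add conmon_cgroup = "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls, before restarting CRI-O. A hedged Go sketch of the first two substitutions applied to a local sample copy of the file (regex-for-sed stand-in, not minikube's crio.go):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "02-crio.conf" // the node path is /etc/crio/crio.conf.d/02-crio.conf

    	// Start from a small sample so the sketch is self-contained.
    	sample := "[crio.image]\npause_image = \"registry.k8s.io/pause:3.9\"\n[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"
    	if err := os.WriteFile(conf, []byte(sample), 0o644); err != nil {
    		panic(err)
    	}

    	data, _ := os.ReadFile(conf)
    	out := string(data)
    	// Same substitutions as the sed commands in the log.
    	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAllString(out, `pause_image = "registry.k8s.io/pause:3.10.1"`)
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAllString(out, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
    	if err := os.WriteFile(conf, []byte(out), 0o644); err != nil {
    		panic(err)
    	}
    	fmt.Print(out)
    }
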
	I1217 20:02:49.548787  661899 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:49.548860  661899 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:49.553975  661899 start.go:564] Will wait 60s for crictl version
	I1217 20:02:49.554042  661899 ssh_runner.go:195] Run: which crictl
	I1217 20:02:49.558923  661899 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:49.592357  661899 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:49.592500  661899 ssh_runner.go:195] Run: crio --version
	I1217 20:02:49.632429  661899 ssh_runner.go:195] Run: crio --version
	I1217 20:02:49.684016  661899 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:02:48.615643  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.717361  663785 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:02:48.722493  663785 fix.go:56] duration metric: took 4.881179695s for fixHost
	I1217 20:02:48.722525  663785 start.go:83] releasing machines lock for "embed-certs-147021", held for 4.881249387s
	I1217 20:02:48.722604  663785 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-147021
	I1217 20:02:48.743848  663785 ssh_runner.go:195] Run: cat /version.json
	I1217 20:02:48.743901  663785 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:02:48.743917  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.743964  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:48.769219  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.773047  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:48.932670  663785 ssh_runner.go:195] Run: systemctl --version
	I1217 20:02:48.939847  663785 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:02:48.979562  663785 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:02:48.984792  663785 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:02:48.984868  663785 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:02:48.995530  663785 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 20:02:48.995563  663785 start.go:496] detecting cgroup driver to use...
	I1217 20:02:48.995597  663785 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:02:48.995644  663785 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:02:49.017579  663785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:02:49.036716  663785 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:02:49.036796  663785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:02:49.052759  663785 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:02:49.066858  663785 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:02:49.167570  663785 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:02:49.252733  663785 docker.go:234] disabling docker service ...
	I1217 20:02:49.252800  663785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:02:49.267268  663785 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:02:49.281274  663785 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:02:49.370439  663785 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:02:49.457423  663785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:02:49.473854  663785 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:02:49.492403  663785 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:02:49.492468  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.505388  663785 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:02:49.505468  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.516670  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.527052  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.539723  663785 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:02:49.552332  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.564669  663785 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.577556  663785 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:02:49.592301  663785 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:02:49.603509  663785 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:02:49.615170  663785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:49.721174  663785 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:02:49.890529  663785 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:02:49.890598  663785 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:02:49.895802  663785 start.go:564] Will wait 60s for crictl version
	I1217 20:02:49.895867  663785 ssh_runner.go:195] Run: which crictl
	I1217 20:02:49.900739  663785 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:02:49.933971  663785 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:02:49.934061  663785 ssh_runner.go:195] Run: crio --version
	I1217 20:02:49.969309  663785 ssh_runner.go:195] Run: crio --version
	I1217 20:02:50.024177  663785 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	I1217 20:02:50.025394  663785 cli_runner.go:164] Run: docker network inspect embed-certs-147021 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:50.046888  663785 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:50.052845  663785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:50.067008  663785 kubeadm.go:884] updating cluster {Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docke
r BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:50.067307  663785 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:50.067376  663785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:50.107911  663785 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:50.107936  663785 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:50.108004  663785 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:50.140588  663785 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:50.140613  663785 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:50.140624  663785 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.3 crio true true} ...
	I1217 20:02:50.140746  663785 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-147021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
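The [Unit]/[Service]/[Install] fragment above is the kubelet systemd drop-in; a few lines further down the log transfers a 368-byte file to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. Writing the same drop-in by hand would look like the sketch below (the file content is presumed to match the block shown above); the empty ExecStart= line is what clears the packaged unit's command before the minikube-specific one takes effect.

# Hand-written equivalent of the kubelet drop-in the log installs (sketch only).
sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-147021 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2

[Install]
EOF
# The log reloads systemd and starts kubelet afterwards: systemctl daemon-reload; systemctl start kubelet
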
	I1217 20:02:50.140832  663785 ssh_runner.go:195] Run: crio config
	I1217 20:02:50.204906  663785 cni.go:84] Creating CNI manager for ""
	I1217 20:02:50.204929  663785 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:50.204946  663785 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:50.204969  663785 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-147021 NodeName:embed-certs-147021 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:50.205122  663785 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-147021"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:50.205197  663785 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:02:50.214324  663785 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:50.214403  663785 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:50.225656  663785 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1217 20:02:50.242737  663785 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:02:50.260770  663785 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1217 20:02:50.278188  663785 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:50.282910  663785 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
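Both /etc/hosts updates above (host.minikube.internal earlier, control-plane.minikube.internal here) use the same remove-then-append pattern: strip any stale line ending in the hostname, append the current mapping, and copy the temp file back with sudo. As a reusable shell function (the function name and argument order are illustrative; the pattern itself is taken from the log):

# Idempotent /etc/hosts update, mirroring the bash one-liners above.
update_hosts_entry() {
  local ip="$1" name="$2"
  { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
  sudo cp "/tmp/hosts.$$" /etc/hosts && rm -f "/tmp/hosts.$$"
}

update_hosts_entry 192.168.85.2 control-plane.minikube.internal
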
	I1217 20:02:50.295374  663785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:50.407365  663785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:50.431348  663785 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021 for IP: 192.168.85.2
	I1217 20:02:50.431371  663785 certs.go:195] generating shared ca certs ...
	I1217 20:02:50.431394  663785 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.431579  663785 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:50.431645  663785 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:50.431657  663785 certs.go:257] generating profile certs ...
	I1217 20:02:50.431781  663785 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/client.key
	I1217 20:02:50.431862  663785 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key.45939a3a
	I1217 20:02:50.431911  663785 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key
	I1217 20:02:50.432056  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:50.432118  663785 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:50.432129  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:50.432166  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:50.432208  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:50.432242  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:50.432309  663785 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:50.433284  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:50.463769  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:50.488659  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:50.513334  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:50.542448  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1217 20:02:50.575166  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1217 20:02:50.600582  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:50.627050  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/embed-certs-147021/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:02:50.656451  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:50.683235  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:50.707615  663785 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:50.731374  663785 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:50.748832  663785 ssh_runner.go:195] Run: openssl version
	I1217 20:02:50.757380  663785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.767216  663785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:50.779323  663785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.784796  663785 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.784865  663785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.844694  663785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:50.861389  663785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.872989  663785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:50.884269  663785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.889153  663785 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.889217  663785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.932104  663785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:50.942603  663785 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.954742  663785 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:50.968858  663785 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.976199  663785 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.976262  663785 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:51.040710  663785 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:51.052332  663785 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:51.057501  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 20:02:51.116321  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 20:02:51.190454  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 20:02:51.250684  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 20:02:51.309302  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 20:02:51.360264  663785 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
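Each openssl run above uses -checkend 86400, which exits non-zero when the certificate will expire within the next 86400 seconds (24 hours), presumably so the existing control-plane certificates can be reused only while they remain valid for at least a day. The same check over the paths from the log, as a small loop:

# Report any control-plane certificate that would expire within 24 hours.
for crt in \
  /var/lib/minikube/certs/apiserver-kubelet-client.crt \
  /var/lib/minikube/certs/apiserver-etcd-client.crt \
  /var/lib/minikube/certs/etcd/server.crt \
  /var/lib/minikube/certs/etcd/healthcheck-client.crt \
  /var/lib/minikube/certs/etcd/peer.crt \
  /var/lib/minikube/certs/front-proxy-client.crt; do
  if ! openssl x509 -noout -in "$crt" -checkend 86400; then
    echo "certificate expiring soon (or unreadable): $crt" >&2
  fi
done
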
	I1217 20:02:51.416579  663785 kubeadm.go:401] StartCluster: {Name:embed-certs-147021 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:embed-certs-147021 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker B
inaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:51.416695  663785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:51.416774  663785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:51.478151  663785 cri.go:89] found id: "908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb"
	I1217 20:02:51.478197  663785 cri.go:89] found id: "9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40"
	I1217 20:02:51.478203  663785 cri.go:89] found id: "65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d"
	I1217 20:02:51.478208  663785 cri.go:89] found id: "d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231"
	I1217 20:02:51.478222  663785 cri.go:89] found id: ""
	I1217 20:02:51.478276  663785 ssh_runner.go:195] Run: sudo runc list -f json
	W1217 20:02:51.508331  663785 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:02:51Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:02:51.508425  663785 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:51.526578  663785 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 20:02:51.526603  663785 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 20:02:51.526653  663785 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 20:02:51.535937  663785 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 20:02:51.536655  663785 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-147021" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:51.536957  663785 kubeconfig.go:62] /home/jenkins/minikube-integration/22186-372245/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-147021" cluster setting kubeconfig missing "embed-certs-147021" context setting]
	I1217 20:02:51.537678  663785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:51.539655  663785 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 20:02:51.551164  663785 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1217 20:02:51.551204  663785 kubeadm.go:602] duration metric: took 24.594853ms to restartPrimaryControlPlane
	I1217 20:02:51.551216  663785 kubeadm.go:403] duration metric: took 134.651056ms to StartCluster
	I1217 20:02:51.551242  663785 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:51.551320  663785 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:02:51.552909  663785 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:51.553351  663785 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:02:51.553626  663785 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:02:51.553706  663785 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:02:51.553805  663785 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-147021"
	I1217 20:02:51.553827  663785 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-147021"
	W1217 20:02:51.553838  663785 addons.go:248] addon storage-provisioner should already be in state true
	I1217 20:02:51.553863  663785 addons.go:70] Setting dashboard=true in profile "embed-certs-147021"
	I1217 20:02:51.553882  663785 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:02:51.553907  663785 addons.go:239] Setting addon dashboard=true in "embed-certs-147021"
	W1217 20:02:51.553920  663785 addons.go:248] addon dashboard should already be in state true
	I1217 20:02:51.553958  663785 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:02:51.554493  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.554516  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.554716  663785 addons.go:70] Setting default-storageclass=true in profile "embed-certs-147021"
	I1217 20:02:51.554738  663785 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-147021"
	I1217 20:02:51.555021  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.555916  663785 out.go:179] * Verifying Kubernetes components...
	I1217 20:02:51.557128  663785 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:51.592216  663785 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:02:51.594244  663785 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:51.594276  663785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:02:51.594350  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:51.594646  663785 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 20:02:51.596060  663785 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 20:02:49.685304  661899 cli_runner.go:164] Run: docker network inspect kindnet-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:02:49.704985  661899 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1217 20:02:49.709915  661899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:49.721547  661899 kubeadm.go:884] updating cluster {Name:kindnet-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServer
Names:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePa
th: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:02:49.721717  661899 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:02:49.721782  661899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:49.758304  661899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:49.758330  661899 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:02:49.758385  661899 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:02:49.790201  661899 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:02:49.790227  661899 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:02:49.790237  661899 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.3 crio true true} ...
	I1217 20:02:49.790343  661899 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kindnet-601560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:kindnet-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I1217 20:02:49.790419  661899 ssh_runner.go:195] Run: crio config
	I1217 20:02:49.853038  661899 cni.go:84] Creating CNI manager for "kindnet"
	I1217 20:02:49.853093  661899 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:02:49.853123  661899 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-601560 NodeName:kindnet-601560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:02:49.853284  661899 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-601560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:02:49.853356  661899 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:02:49.863649  661899 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:02:49.863714  661899 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:02:49.872398  661899 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (365 bytes)
	I1217 20:02:49.887241  661899 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:02:49.905533  661899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1217 20:02:49.922815  661899 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:02:49.927135  661899 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:02:49.940862  661899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:02:50.053437  661899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:50.079323  661899 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560 for IP: 192.168.103.2
	I1217 20:02:50.079347  661899 certs.go:195] generating shared ca certs ...
	I1217 20:02:50.079368  661899 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.079533  661899 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:02:50.079591  661899 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:02:50.079604  661899 certs.go:257] generating profile certs ...
	I1217 20:02:50.079674  661899 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.key
	I1217 20:02:50.079691  661899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.crt with IP's: []
	I1217 20:02:50.127324  661899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.crt ...
	I1217 20:02:50.127359  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.crt: {Name:mked69a287e12e7b6e8886165202d8cac053de52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.127576  661899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.key ...
	I1217 20:02:50.127587  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/client.key: {Name:mk1208f00603053aee8fdb54d644709e1cf3fd77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.127691  661899 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9
	I1217 20:02:50.127708  661899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1217 20:02:50.152282  661899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9 ...
	I1217 20:02:50.152317  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9: {Name:mk640a75542df3c1b914c56b6ca96b6c4b85975c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.152510  661899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9 ...
	I1217 20:02:50.152530  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9: {Name:mke3f095329d88a6c31bb2a355d65602ccdd02cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.152642  661899 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt.a1245fe9 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt
	I1217 20:02:50.152743  661899 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key.a1245fe9 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key
	I1217 20:02:50.152830  661899 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key
	I1217 20:02:50.152857  661899 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt with IP's: []
	I1217 20:02:50.201241  661899 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt ...
	I1217 20:02:50.201273  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt: {Name:mk4eabf7add40b088cfa86c718b2dccfa597a940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.201512  661899 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key ...
	I1217 20:02:50.201539  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key: {Name:mk556df6edc58ef0c5447fd5ad71c1189aa37eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:02:50.201801  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:02:50.201846  661899 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:02:50.201857  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:02:50.201889  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:02:50.201932  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:02:50.201965  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:02:50.202020  661899 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:02:50.202844  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:02:50.226717  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:02:50.251643  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:02:50.278437  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:02:50.300603  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:02:50.323610  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:02:50.356993  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:02:50.381194  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kindnet-601560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 20:02:50.405245  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:02:50.431988  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:02:50.462279  661899 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:02:50.487883  661899 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:02:50.507222  661899 ssh_runner.go:195] Run: openssl version
	I1217 20:02:50.516250  661899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.527440  661899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:02:50.540054  661899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.545673  661899 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.545737  661899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:02:50.606471  661899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:50.616914  661899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:02:50.627593  661899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.642471  661899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:02:50.653265  661899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.658590  661899 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.658659  661899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:02:50.710062  661899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:02:50.720146  661899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:02:50.731152  661899 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.742736  661899 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:02:50.752789  661899 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.758028  661899 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.758139  661899 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:02:50.813579  661899 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:02:50.823400  661899 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
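The hashed names being linked above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject hashes: OpenSSL resolves CA certificates in /etc/ssl/certs by <subject-hash>.0, and the hash is exactly what the earlier openssl x509 -hash -noout runs printed. Recreating one of these links by hand (example paths taken from the log):

# The hash-symlink scheme used above: link the cert into /etc/ssl/certs, then add
# a <subject-hash>.0 symlink so OpenSSL can find it during verification.
CERT=/usr/share/ca-certificates/minikubeCA.pem
sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
HASH=$(openssl x509 -hash -noout -in "$CERT")     # prints b5213941 for this CA
sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
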
	I1217 20:02:50.835946  661899 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:02:50.841541  661899 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:02:50.841603  661899 kubeadm.go:401] StartCluster: {Name:kindnet-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:kindnet-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:02:50.841694  661899 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:02:50.842268  661899 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:02:50.884186  661899 cri.go:89] found id: ""
	I1217 20:02:50.884273  661899 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:02:50.895011  661899 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:02:50.906757  661899 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:02:50.906914  661899 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:02:50.916804  661899 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:02:50.916833  661899 kubeadm.go:158] found existing configuration files:
	
	I1217 20:02:50.916893  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:02:50.926277  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:02:50.926354  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:02:50.935574  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:02:50.946371  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:02:50.946442  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:02:50.960819  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:02:50.976844  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:02:50.976900  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:02:50.988053  661899 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:02:51.001786  661899 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:02:51.001919  661899 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:02:51.016954  661899 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:02:51.118794  661899 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:02:51.234275  661899 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:02:51.597243  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 20:02:51.597435  663785 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 20:02:51.597515  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:51.598990  663785 addons.go:239] Setting addon default-storageclass=true in "embed-certs-147021"
	W1217 20:02:51.599020  663785 addons.go:248] addon default-storageclass should already be in state true
	I1217 20:02:51.599049  663785 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:02:51.599561  663785 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:02:51.633833  663785 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:51.633857  663785 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:02:51.633945  663785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:02:51.638311  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:51.655387  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:51.682260  663785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:02:51.781215  663785 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:02:51.814681  663785 node_ready.go:35] waiting up to 6m0s for node "embed-certs-147021" to be "Ready" ...
	I1217 20:02:51.828341  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 20:02:51.828414  663785 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 20:02:51.831490  663785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:02:51.845768  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 20:02:51.845791  663785 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 20:02:51.857549  663785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:02:51.864033  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 20:02:51.864061  663785 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 20:02:51.889960  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 20:02:51.889991  663785 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 20:02:51.934517  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 20:02:51.934542  663785 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 20:02:51.966016  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 20:02:51.966044  663785 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 20:02:51.985059  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 20:02:51.985109  663785 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1217 20:02:52.004156  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 20:02:52.004185  663785 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 20:02:52.026050  663785 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:52.026098  663785 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 20:02:52.043062  663785 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 20:02:53.240482  663785 node_ready.go:49] node "embed-certs-147021" is "Ready"
	I1217 20:02:53.240536  663785 node_ready.go:38] duration metric: took 1.425810707s for node "embed-certs-147021" to be "Ready" ...
	I1217 20:02:53.240556  663785 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:02:53.240617  663785 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:02:49.126206  660659 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.111378ms
	I1217 20:02:49.130757  660659 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:02:49.130890  660659 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1217 20:02:49.131038  660659 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:02:49.131171  660659 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:02:51.339345  660659 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.208382366s
	I1217 20:02:51.759238  660659 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.628375171s
	I1217 20:02:53.632551  660659 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501705543s
	I1217 20:02:53.653967  660659 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:02:53.669344  660659 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:02:53.684595  660659 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:02:53.684892  660659 kubeadm.go:319] [mark-control-plane] Marking the node auto-601560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:02:53.696814  660659 kubeadm.go:319] [bootstrap-token] Using token: hb8mqj.5kxeg2f4381ik7ew
	I1217 20:02:53.698478  660659 out.go:252]   - Configuring RBAC rules ...
	I1217 20:02:53.698645  660659 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:02:53.706437  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:02:53.715064  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:02:53.717973  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:02:53.721498  660659 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:02:53.725206  660659 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:02:53.907627  663785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.076101302s)
	I1217 20:02:53.907941  663785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.050353476s)
	I1217 20:02:53.908519  663785 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.865401392s)
	I1217 20:02:53.908627  663785 api_server.go:72] duration metric: took 2.355227594s to wait for apiserver process to appear ...
	I1217 20:02:53.908662  663785 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:02:53.908709  663785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 20:02:53.912012  663785 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-147021 addons enable metrics-server
	
	I1217 20:02:53.917501  663785 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:53.917544  663785 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:53.930572  663785 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1217 20:02:54.040621  660659 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:02:54.467513  660659 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:02:55.039280  660659 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:02:55.040168  660659 kubeadm.go:319] 
	I1217 20:02:55.040298  660659 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:02:55.040309  660659 kubeadm.go:319] 
	I1217 20:02:55.040421  660659 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:02:55.040431  660659 kubeadm.go:319] 
	I1217 20:02:55.040473  660659 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:02:55.040579  660659 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:02:55.040678  660659 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:02:55.040689  660659 kubeadm.go:319] 
	I1217 20:02:55.040776  660659 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:02:55.040783  660659 kubeadm.go:319] 
	I1217 20:02:55.040840  660659 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:02:55.040849  660659 kubeadm.go:319] 
	I1217 20:02:55.040915  660659 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:02:55.041018  660659 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:02:55.041167  660659 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:02:55.041178  660659 kubeadm.go:319] 
	I1217 20:02:55.041301  660659 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:02:55.041442  660659 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:02:55.041458  660659 kubeadm.go:319] 
	I1217 20:02:55.041600  660659 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hb8mqj.5kxeg2f4381ik7ew \
	I1217 20:02:55.041749  660659 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:02:55.041801  660659 kubeadm.go:319] 	--control-plane 
	I1217 20:02:55.041828  660659 kubeadm.go:319] 
	I1217 20:02:55.041970  660659 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:02:55.041982  660659 kubeadm.go:319] 
	I1217 20:02:55.042143  660659 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hb8mqj.5kxeg2f4381ik7ew \
	I1217 20:02:55.042277  660659 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:02:55.044669  660659 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:02:55.044784  660659 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:02:55.044829  660659 cni.go:84] Creating CNI manager for ""
	I1217 20:02:55.044849  660659 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1217 20:02:55.047329  660659 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1217 20:02:53.931879  663785 addons.go:530] duration metric: took 2.37817208s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1217 20:02:54.408847  663785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 20:02:54.416760  663785 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1217 20:02:54.416791  663785 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1217 20:02:54.909250  663785 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1217 20:02:54.914203  663785 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1217 20:02:54.915535  663785 api_server.go:141] control plane version: v1.34.3
	I1217 20:02:54.915564  663785 api_server.go:131] duration metric: took 1.006894599s to wait for apiserver health ...
	I1217 20:02:54.915574  663785 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:02:54.919072  663785 system_pods.go:59] 8 kube-system pods found
	I1217 20:02:54.919129  663785 system_pods.go:61] "coredns-66bc5c9577-wkvhv" [aa6b430f-e79f-4a53-b8c7-f51dd721cd13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:02:54.919180  663785 system_pods.go:61] "etcd-embed-certs-147021" [e095b9b0-02a9-469c-b7ca-11f07e9e8bc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:02:54.919196  663785 system_pods.go:61] "kindnet-qp6z8" [2f98dd22-cea7-49e2-96b4-3025f53bda36] Running
	I1217 20:02:54.919205  663785 system_pods.go:61] "kube-apiserver-embed-certs-147021" [bcd316f8-903e-42ff-b60f-6509d564d602] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:02:54.919219  663785 system_pods.go:61] "kube-controller-manager-embed-certs-147021" [0e1fc59d-92f8-4bbe-acdf-1ea9e09712c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:02:54.919229  663785 system_pods.go:61] "kube-proxy-nwn9n" [6a7ffc94-190c-4ded-8331-cc243b65c2bc] Running
	I1217 20:02:54.919238  663785 system_pods.go:61] "kube-scheduler-embed-certs-147021" [b3ab0a37-453b-4772-a9d1-abc30a840479] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:02:54.919251  663785 system_pods.go:61] "storage-provisioner" [8515815b-8ad1-4db6-9c1e-ac36c14d42ce] Running
	I1217 20:02:54.919260  663785 system_pods.go:74] duration metric: took 3.677981ms to wait for pod list to return data ...
	I1217 20:02:54.919272  663785 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:02:54.922107  663785 default_sa.go:45] found service account: "default"
	I1217 20:02:54.922131  663785 default_sa.go:55] duration metric: took 2.848308ms for default service account to be created ...
	I1217 20:02:54.922157  663785 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:02:54.925258  663785 system_pods.go:86] 8 kube-system pods found
	I1217 20:02:54.925292  663785 system_pods.go:89] "coredns-66bc5c9577-wkvhv" [aa6b430f-e79f-4a53-b8c7-f51dd721cd13] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:02:54.925301  663785 system_pods.go:89] "etcd-embed-certs-147021" [e095b9b0-02a9-469c-b7ca-11f07e9e8bc2] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 20:02:54.925309  663785 system_pods.go:89] "kindnet-qp6z8" [2f98dd22-cea7-49e2-96b4-3025f53bda36] Running
	I1217 20:02:54.925321  663785 system_pods.go:89] "kube-apiserver-embed-certs-147021" [bcd316f8-903e-42ff-b60f-6509d564d602] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:02:54.925333  663785 system_pods.go:89] "kube-controller-manager-embed-certs-147021" [0e1fc59d-92f8-4bbe-acdf-1ea9e09712c6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:02:54.925339  663785 system_pods.go:89] "kube-proxy-nwn9n" [6a7ffc94-190c-4ded-8331-cc243b65c2bc] Running
	I1217 20:02:54.925347  663785 system_pods.go:89] "kube-scheduler-embed-certs-147021" [b3ab0a37-453b-4772-a9d1-abc30a840479] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:02:54.925354  663785 system_pods.go:89] "storage-provisioner" [8515815b-8ad1-4db6-9c1e-ac36c14d42ce] Running
	I1217 20:02:54.925365  663785 system_pods.go:126] duration metric: took 3.198444ms to wait for k8s-apps to be running ...
	I1217 20:02:54.925379  663785 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:02:54.925433  663785 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:02:54.940571  663785 system_svc.go:56] duration metric: took 15.178781ms WaitForService to wait for kubelet
	I1217 20:02:54.940609  663785 kubeadm.go:587] duration metric: took 3.387213863s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:02:54.940632  663785 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:02:54.943838  663785 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:02:54.943868  663785 node_conditions.go:123] node cpu capacity is 8
	I1217 20:02:54.943898  663785 node_conditions.go:105] duration metric: took 3.26071ms to run NodePressure ...
	I1217 20:02:54.943916  663785 start.go:242] waiting for startup goroutines ...
	I1217 20:02:54.943931  663785 start.go:247] waiting for cluster config update ...
	I1217 20:02:54.943947  663785 start.go:256] writing updated cluster config ...
	I1217 20:02:54.944315  663785 ssh_runner.go:195] Run: rm -f paused
	I1217 20:02:54.948897  663785 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:02:54.953286  663785 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wkvhv" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 20:02:56.959837  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:02:55.048632  660659 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:02:55.053717  660659 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:02:55.053744  660659 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:02:55.068850  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:02:55.325231  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:55.325418  660659 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:02:55.325519  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-601560 minikube.k8s.io/updated_at=2025_12_17T20_02_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=auto-601560 minikube.k8s.io/primary=true
	I1217 20:02:55.421107  660659 ops.go:34] apiserver oom_adj: -16
	I1217 20:02:55.421287  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:55.921425  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:56.422283  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:56.922156  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:57.422356  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:57.922321  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:02:58.421494  660659 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	
	==> CRI-O <==
	Dec 17 20:02:28 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:28.832568303Z" level=info msg="Started container" PID=1752 containerID=b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper id=12a901bb-63c4-4700-8f2c-a58a2f23bb1d name=/runtime.v1.RuntimeService/StartContainer sandboxID=4908308aab9f665efa97273fe148688523a28e81a689c272e813270866425344
	Dec 17 20:02:28 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:28.885703016Z" level=info msg="Removing container: 0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e" id=7e49898b-9e09-46bd-8e9e-a610506dc632 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:02:28 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:28.907474546Z" level=info msg="Removed container 0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=7e49898b-9e09-46bd-8e9e-a610506dc632 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.908561756Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e2360bed-1f23-4a3e-880c-94b725861ca9 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.938494629Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=82e5756a-f258-4d4e-b1ef-42c218919ae8 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.939838651Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=90605ae6-10a5-484a-a470-533f41a2e36c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:36 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:36.940021465Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.125914704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.126194368Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c39a6b737bb47e246f2c14558fbb5573c2aa2aaa957c99a0a355b97fd6ead6b2/merged/etc/passwd: no such file or directory"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.126246812Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c39a6b737bb47e246f2c14558fbb5573c2aa2aaa957c99a0a355b97fd6ead6b2/merged/etc/group: no such file or directory"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.126500173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.252426036Z" level=info msg="Created container b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f: kube-system/storage-provisioner/storage-provisioner" id=90605ae6-10a5-484a-a470-533f41a2e36c name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.253231454Z" level=info msg="Starting container: b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f" id=24f608db-c308-4e8f-a1d5-78909cdfc4b6 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:37 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:37.255436267Z" level=info msg="Started container" PID=1766 containerID=b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f description=kube-system/storage-provisioner/storage-provisioner id=24f608db-c308-4e8f-a1d5-78909cdfc4b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=49d55feebb0b073ce42d0a893b9de78056480bc28ea663c3f29f72ae7e3c4694
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.776728185Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=176a73a5-965a-4449-8957-d4bf4f47871d name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.778692197Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ee68e35f-0f3d-4054-8402-2b41bd8af59f name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.77982742Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=fdd908fa-6752-4cb2-9eb4-4d8d0f31ad49 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.779966778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.787280327Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.788139047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.838823877Z" level=info msg="Created container cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=fdd908fa-6752-4cb2-9eb4-4d8d0f31ad49 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.839750581Z" level=info msg="Starting container: cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f" id=cae4bf05-6514-4d87-9a28-e57264885f43 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.842450716Z" level=info msg="Started container" PID=1802 containerID=cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper id=cae4bf05-6514-4d87-9a28-e57264885f43 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4908308aab9f665efa97273fe148688523a28e81a689c272e813270866425344
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.955857561Z" level=info msg="Removing container: b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548" id=0aeef156-3fb7-4755-a1ee-285c93ac8947 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:02:50 default-k8s-diff-port-759234 crio[561]: time="2025-12-17T20:02:50.96930769Z" level=info msg="Removed container b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4/dashboard-metrics-scraper" id=0aeef156-3fb7-4755-a1ee-285c93ac8947 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	cc3524e5a1365       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   4908308aab9f6       dashboard-metrics-scraper-6ffb444bf9-x5gq4             kubernetes-dashboard
	b5fa7a549a8d2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   49d55feebb0b0       storage-provisioner                                    kube-system
	f01e59b3a5bec       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   815e22192fce1       kubernetes-dashboard-855c9754f9-7lcjb                  kubernetes-dashboard
	f495d818556a6       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   885f985ad7169       busybox                                                default
	1f92b0022b9d9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   bab7472b60116       coredns-66bc5c9577-lv4jd                               kube-system
	b6958cd5a4d6c       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           54 seconds ago      Running             kube-proxy                  0                   8980982c5b9f0       kube-proxy-ztxcd                                       kube-system
	5c35d460d84a2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   49d55feebb0b0       storage-provisioner                                    kube-system
	ff749e52a1c7b       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           54 seconds ago      Running             kindnet-cni                 0                   a8018d2d3cff7       kindnet-dcwlb                                          kube-system
	d83a0fe0ebf9e       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           57 seconds ago      Running             kube-apiserver              0                   11954d3b28d67       kube-apiserver-default-k8s-diff-port-759234            kube-system
	13df285326623       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           57 seconds ago      Running             etcd                        0                   b9b23e72adb41       etcd-default-k8s-diff-port-759234                      kube-system
	85ffda0bbbbe8       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           57 seconds ago      Running             kube-controller-manager     0                   5043885fb0536       kube-controller-manager-default-k8s-diff-port-759234   kube-system
	4d360a4c3fd6f       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           57 seconds ago      Running             kube-scheduler              0                   77b85d1ab6cbd       kube-scheduler-default-k8s-diff-port-759234            kube-system
	
	
	==> coredns [1f92b0022b9d9a916df843f4334eb7bbb4b21ace14628e070640e5df15619f23] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53007 - 44118 "HINFO IN 2440421156102950590.91312000856898436. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.022480456s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-759234
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-759234
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=default-k8s-diff-port-759234
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_06_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-759234
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:02:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:02:35 +0000   Wed, 17 Dec 2025 20:01:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-759234
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                db8290dd-36ef-4726-9d3e-6ea726055ffb
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-lv4jd                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-759234                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-dcwlb                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-759234             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-759234    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-ztxcd                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-759234             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-x5gq4              0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7lcjb                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-759234 event: Registered Node default-k8s-diff-port-759234 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-759234 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-759234 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node default-k8s-diff-port-759234 event: Registered Node default-k8s-diff-port-759234 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [13df2853266238c53f3daab51af6a83329ec267b44072f537e38af71a0078c3f] <==
	{"level":"warn","ts":"2025-12-17T20:02:04.573056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.587324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.594995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.603974Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55032","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.612533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.619973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.626749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.633968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.641943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.650762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.660659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.666350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.673983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.697352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.705007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.712132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:04.760173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55250","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:02:36.232680Z","caller":"traceutil/trace.go:172","msg":"trace[173342055] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"115.109336ms","start":"2025-12-17T20:02:36.117551Z","end":"2025-12-17T20:02:36.232660Z","steps":["trace[173342055] 'process raft request'  (duration: 115.066663ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-17T20:02:36.232685Z","caller":"traceutil/trace.go:172","msg":"trace[1087504951] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"168.637004ms","start":"2025-12-17T20:02:36.064025Z","end":"2025-12-17T20:02:36.232662Z","steps":["trace[1087504951] 'process raft request'  (duration: 119.697294ms)","trace[1087504951] 'compare'  (duration: 48.790717ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:02:37.255610Z","caller":"traceutil/trace.go:172","msg":"trace[1589292807] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"340.306273ms","start":"2025-12-17T20:02:36.915285Z","end":"2025-12-17T20:02:37.255591Z","steps":["trace[1589292807] 'process raft request'  (duration: 324.696059ms)","trace[1589292807] 'compare'  (duration: 15.265287ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:02:37.256028Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-17T20:02:36.915271Z","time spent":"340.402373ms","remote":"127.0.0.1:54454","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4620,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/storage-provisioner\" mod_revision:479 > success:<request_put:<key:\"/registry/pods/kube-system/storage-provisioner\" value_size:4566 >> failure:<request_range:<key:\"/registry/pods/kube-system/storage-provisioner\" > >"}
	{"level":"warn","ts":"2025-12-17T20:02:37.514966Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"198.204146ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-lv4jd\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-12-17T20:02:37.515131Z","caller":"traceutil/trace.go:172","msg":"trace[1645125940] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-lv4jd; range_end:; response_count:1; response_revision:621; }","duration":"198.34266ms","start":"2025-12-17T20:02:37.316729Z","end":"2025-12-17T20:02:37.515072Z","steps":["trace[1645125940] 'agreement among raft nodes before linearized reading'  (duration: 63.542438ms)","trace[1645125940] 'range keys from in-memory index tree'  (duration: 134.544753ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:02:37.515217Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.658384ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766902728148880 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.1882193695ac5561\" mod_revision:521 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.1882193695ac5561\" value_size:689 lease:6571766902728148355 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.1882193695ac5561\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-17T20:02:37.515296Z","caller":"traceutil/trace.go:172","msg":"trace[459163471] transaction","detail":"{read_only:false; response_revision:622; number_of_response:1; }","duration":"258.044415ms","start":"2025-12-17T20:02:37.257240Z","end":"2025-12-17T20:02:37.515284Z","steps":["trace[459163471] 'process raft request'  (duration: 123.074281ms)","trace[459163471] 'compare'  (duration: 134.480392ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:03:00 up  1:45,  0 user,  load average: 5.49, 3.88, 2.64
	Linux default-k8s-diff-port-759234 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ff749e52a1c7b238ec2a3b689c2471463861c44182ba71da511bc1f90ba22d68] <==
	I1217 20:02:06.369690       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:06.369959       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1217 20:02:06.370179       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:06.370207       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:06.370233       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:06.573339       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:06.573469       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:06.573486       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:06.573715       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:07.065752       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:07.065806       1 metrics.go:72] Registering metrics
	I1217 20:02:07.065916       1 controller.go:711] "Syncing nftables rules"
	I1217 20:02:16.573274       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:16.573344       1 main.go:301] handling current node
	I1217 20:02:26.581201       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:26.581241       1 main.go:301] handling current node
	I1217 20:02:36.573378       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:36.573470       1 main.go:301] handling current node
	I1217 20:02:46.577655       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:46.577692       1 main.go:301] handling current node
	I1217 20:02:56.577719       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1217 20:02:56.577903       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d83a0fe0ebf9e431abfef83125000274ec881515d8b2fe37492a61682b8b7a56] <==
	I1217 20:02:05.328557       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:02:05.328576       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1217 20:02:05.328599       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1217 20:02:05.328661       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:02:05.328669       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:02:05.328674       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:02:05.328680       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:02:05.328996       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1217 20:02:05.329052       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:02:05.329058       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:02:05.337277       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:02:05.368274       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:02:05.376201       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:02:05.586336       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:02:05.616692       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:02:05.635369       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:02:05.643330       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:02:05.652264       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:05.686811       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.125.218"}
	I1217 20:02:05.696675       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.16.150"}
	I1217 20:02:06.232056       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:02:08.711987       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:08.907965       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:02:09.258172       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [85ffda0bbbbe80bde1d1c7403094674a0f0d609d5aa8572f8c470fd845327c85] <==
	I1217 20:02:08.629495       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1217 20:02:08.643842       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:02:08.646989       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:02:08.649343       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:02:08.649358       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:02:08.650576       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:02:08.650600       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:02:08.652928       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1217 20:02:08.653849       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1217 20:02:08.653874       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1217 20:02:08.655047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:02:08.655096       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1217 20:02:08.655142       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:02:08.655146       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:02:08.655170       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1217 20:02:08.655176       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1217 20:02:08.655187       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:02:08.660652       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:02:08.660695       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:02:08.660747       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:02:08.660808       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:02:08.660818       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:02:08.660826       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:02:08.670891       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1217 20:02:08.673202       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b6958cd5a4d6c327cfb1850926f770862f2ba4f2b196595b819413ce72236040] <==
	I1217 20:02:06.157773       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:06.246478       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:02:06.347111       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:02:06.347158       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1217 20:02:06.347285       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:06.370538       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:06.370645       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:02:06.377302       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:06.377753       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:02:06.377790       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:06.380476       1 config.go:200] "Starting service config controller"
	I1217 20:02:06.380521       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:06.380543       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:06.380548       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:06.380561       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:06.380566       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:06.380623       1 config.go:309] "Starting node config controller"
	I1217 20:02:06.380639       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:06.380654       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:06.480636       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1217 20:02:06.480666       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:06.480638       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4d360a4c3fd6f7b37c23d2fae6316c0a6398e536b4ed3c70d59262bc9cbab9c7] <==
	I1217 20:02:04.031926       1 serving.go:386] Generated self-signed cert in-memory
	I1217 20:02:05.340623       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:02:05.340660       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:05.346530       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 20:02:05.346545       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:05.346581       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 20:02:05.346586       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:05.346548       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:05.346650       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:05.347003       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:02:05.347229       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:02:05.446829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:05.446865       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:05.446955       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 17 20:02:10 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:10.776156     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Dec 17 20:02:11 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:11.819888     721 scope.go:117] "RemoveContainer" containerID="955ea5817f1e8123df22b35943e083afd2bd7df677501593ab64e9e943f06bc1"
	Dec 17 20:02:12 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:12.825147     721 scope.go:117] "RemoveContainer" containerID="955ea5817f1e8123df22b35943e083afd2bd7df677501593ab64e9e943f06bc1"
	Dec 17 20:02:12 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:12.825722     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:12 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:12.826025     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:13 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:13.830713     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:13 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:13.831390     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:15 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:15.887124     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7lcjb" podStartSLOduration=1.200522431 podStartE2EDuration="6.887098261s" podCreationTimestamp="2025-12-17 20:02:09 +0000 UTC" firstStartedPulling="2025-12-17 20:02:09.503837796 +0000 UTC m=+6.822134741" lastFinishedPulling="2025-12-17 20:02:15.19041362 +0000 UTC m=+12.508710571" observedRunningTime="2025-12-17 20:02:15.886739508 +0000 UTC m=+13.205036468" watchObservedRunningTime="2025-12-17 20:02:15.887098261 +0000 UTC m=+13.205395222"
	Dec 17 20:02:16 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:16.056401     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:16 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:16.056622     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:28.773833     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:28.882842     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:28.883183     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:28 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:28.883915     721 scope.go:117] "RemoveContainer" containerID="0273700cffedc2f692210434517e474073497b4ed366fd101d1863daa1e5fb9e"
	Dec 17 20:02:36 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:36.056541     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:36 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:36.056839     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:36 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:36.908121     721 scope.go:117] "RemoveContainer" containerID="5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:50.775855     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:50.953779     721 scope.go:117] "RemoveContainer" containerID="cc3524e5a1365cf580ba863f7b11ab20cf3c5c9edb4e476ed6ee32739539386f"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: I1217 20:02:50.953944     721 scope.go:117] "RemoveContainer" containerID="b38a80037849b30cd2cf40d496fdbb749638f3e661012a07d850981750660548"
	Dec 17 20:02:50 default-k8s-diff-port-759234 kubelet[721]: E1217 20:02:50.953957     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-x5gq4_kubernetes-dashboard(f5a128b1-a105-4cdb-aa21-3f46e23e8ea6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-x5gq4" podUID="f5a128b1-a105-4cdb-aa21-3f46e23e8ea6"
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:02:54 default-k8s-diff-port-759234 systemd[1]: kubelet.service: Consumed 1.869s CPU time.
	
	
	==> kubernetes-dashboard [f01e59b3a5bec96adc422b58a3f2d145f5ded1ce16afc6fa1bdf3418adf64dc8] <==
	2025/12/17 20:02:15 Starting overwatch
	2025/12/17 20:02:15 Using namespace: kubernetes-dashboard
	2025/12/17 20:02:15 Using in-cluster config to connect to apiserver
	2025/12/17 20:02:15 Using secret token for csrf signing
	2025/12/17 20:02:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:02:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:02:15 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 20:02:15 Generating JWE encryption key
	2025/12/17 20:02:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:02:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:02:15 Initializing JWE encryption key from synchronized object
	2025/12/17 20:02:15 Creating in-cluster Sidecar client
	2025/12/17 20:02:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:02:15 Serving insecurely on HTTP port: 9090
	2025/12/17 20:02:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [5c35d460d84a27be34da42a759162cb5bc58518237744639622166b502cc652a] <==
	I1217 20:02:06.129244       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:02:36.132634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [b5fa7a549a8d242b4e3f2ea7764d147d6815e6a2a703c84f65f2d3f1d871969f] <==
	I1217 20:02:37.560402       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:02:37.568552       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:02:37.568612       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:02:37.570983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:41.025793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:45.287017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:48.885683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:51.939308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:54.961793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:54.966984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:02:54.967160       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:02:54.967275       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef3dee07-d1ce-418e-a6ba-4a2d4546a253", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-759234_53320cc2-2157-4e67-a487-aa131a78f9f7 became leader
	I1217 20:02:54.967336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759234_53320cc2-2157-4e67-a487-aa131a78f9f7!
	W1217 20:02:54.971358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:54.975624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:02:55.067754       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-759234_53320cc2-2157-4e67-a487-aa131a78f9f7!
	W1217 20:02:56.978356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:56.982550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:58.987772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:02:58.999670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:01.003041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:01.008170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234: exit status 2 (421.322845ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (7.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (7.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-147021 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-147021 --alsologtostderr -v=1: exit status 80 (2.668626899s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-147021 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 20:03:39.315248  677765 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:03:39.315805  677765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:03:39.315818  677765 out.go:374] Setting ErrFile to fd 2...
	I1217 20:03:39.315823  677765 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:03:39.316463  677765 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:03:39.316916  677765 out.go:368] Setting JSON to false
	I1217 20:03:39.316988  677765 mustload.go:66] Loading cluster: embed-certs-147021
	I1217 20:03:39.317856  677765 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:39.318943  677765 cli_runner.go:164] Run: docker container inspect embed-certs-147021 --format={{.State.Status}}
	I1217 20:03:39.344908  677765 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:03:39.345599  677765 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:03:39.425511  677765 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:84 SystemTime:2025-12-17 20:03:39.411544008 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:03:39.426441  677765 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1765965980-22186/minikube-v1.37.0-1765965980-22186-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1765965980-22186-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-147021 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1217 20:03:39.428734  677765 out.go:179] * Pausing node embed-certs-147021 ... 
	I1217 20:03:39.430516  677765 host.go:66] Checking if "embed-certs-147021" exists ...
	I1217 20:03:39.430857  677765 ssh_runner.go:195] Run: systemctl --version
	I1217 20:03:39.430905  677765 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-147021
	I1217 20:03:39.458378  677765 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/embed-certs-147021/id_rsa Username:docker}
	I1217 20:03:39.572743  677765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:39.592832  677765 pause.go:52] kubelet running: true
	I1217 20:03:39.592951  677765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:03:39.826268  677765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:03:39.826396  677765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:03:39.918779  677765 cri.go:89] found id: "a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e"
	I1217 20:03:39.918809  677765 cri.go:89] found id: "42c9fb76fa6175d615c9c78f7030f741afc7310992f335396b1970fe704fefae"
	I1217 20:03:39.918817  677765 cri.go:89] found id: "2766a8fcb5ebd7aeee551794853fcba5d9153eca108dbbefaecfd962e38c5f3d"
	I1217 20:03:39.918822  677765 cri.go:89] found id: "537a5407ce604a89aeaa3dfb925609467a6bd3eeb7abd61d4ca526f32aafd92b"
	I1217 20:03:39.918827  677765 cri.go:89] found id: "138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350"
	I1217 20:03:39.918841  677765 cri.go:89] found id: "908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb"
	I1217 20:03:39.918846  677765 cri.go:89] found id: "9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40"
	I1217 20:03:39.918851  677765 cri.go:89] found id: "65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d"
	I1217 20:03:39.918855  677765 cri.go:89] found id: "d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231"
	I1217 20:03:39.919025  677765 cri.go:89] found id: "7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	I1217 20:03:39.919041  677765 cri.go:89] found id: "4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf"
	I1217 20:03:39.919045  677765 cri.go:89] found id: ""
	I1217 20:03:39.919153  677765 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:03:39.938486  677765 retry.go:31] will retry after 175.079671ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:03:39Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:03:40.113808  677765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:40.130839  677765 pause.go:52] kubelet running: false
	I1217 20:03:40.130905  677765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:03:40.345846  677765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:03:40.345962  677765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:03:40.440352  677765 cri.go:89] found id: "a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e"
	I1217 20:03:40.440387  677765 cri.go:89] found id: "42c9fb76fa6175d615c9c78f7030f741afc7310992f335396b1970fe704fefae"
	I1217 20:03:40.440393  677765 cri.go:89] found id: "2766a8fcb5ebd7aeee551794853fcba5d9153eca108dbbefaecfd962e38c5f3d"
	I1217 20:03:40.440399  677765 cri.go:89] found id: "537a5407ce604a89aeaa3dfb925609467a6bd3eeb7abd61d4ca526f32aafd92b"
	I1217 20:03:40.440404  677765 cri.go:89] found id: "138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350"
	I1217 20:03:40.440410  677765 cri.go:89] found id: "908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb"
	I1217 20:03:40.440414  677765 cri.go:89] found id: "9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40"
	I1217 20:03:40.440419  677765 cri.go:89] found id: "65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d"
	I1217 20:03:40.440424  677765 cri.go:89] found id: "d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231"
	I1217 20:03:40.440441  677765 cri.go:89] found id: "7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	I1217 20:03:40.440446  677765 cri.go:89] found id: "4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf"
	I1217 20:03:40.440450  677765 cri.go:89] found id: ""
	I1217 20:03:40.440509  677765 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:03:40.457198  677765 retry.go:31] will retry after 350.679596ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:03:40Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:03:40.808820  677765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:40.827513  677765 pause.go:52] kubelet running: false
	I1217 20:03:40.827579  677765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:03:41.045458  677765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:03:41.045547  677765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:03:41.141798  677765 cri.go:89] found id: "a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e"
	I1217 20:03:41.141831  677765 cri.go:89] found id: "42c9fb76fa6175d615c9c78f7030f741afc7310992f335396b1970fe704fefae"
	I1217 20:03:41.141845  677765 cri.go:89] found id: "2766a8fcb5ebd7aeee551794853fcba5d9153eca108dbbefaecfd962e38c5f3d"
	I1217 20:03:41.141850  677765 cri.go:89] found id: "537a5407ce604a89aeaa3dfb925609467a6bd3eeb7abd61d4ca526f32aafd92b"
	I1217 20:03:41.141855  677765 cri.go:89] found id: "138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350"
	I1217 20:03:41.141860  677765 cri.go:89] found id: "908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb"
	I1217 20:03:41.141864  677765 cri.go:89] found id: "9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40"
	I1217 20:03:41.141869  677765 cri.go:89] found id: "65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d"
	I1217 20:03:41.141873  677765 cri.go:89] found id: "d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231"
	I1217 20:03:41.141881  677765 cri.go:89] found id: "7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	I1217 20:03:41.141885  677765 cri.go:89] found id: "4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf"
	I1217 20:03:41.141889  677765 cri.go:89] found id: ""
	I1217 20:03:41.141936  677765 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:03:41.158047  677765 retry.go:31] will retry after 371.685414ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:03:41Z" level=error msg="open /run/runc: no such file or directory"
	I1217 20:03:41.530431  677765 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:41.549507  677765 pause.go:52] kubelet running: false
	I1217 20:03:41.549571  677765 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1217 20:03:41.754193  677765 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1217 20:03:41.754316  677765 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1217 20:03:41.855853  677765 cri.go:89] found id: "a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e"
	I1217 20:03:41.855881  677765 cri.go:89] found id: "42c9fb76fa6175d615c9c78f7030f741afc7310992f335396b1970fe704fefae"
	I1217 20:03:41.855889  677765 cri.go:89] found id: "2766a8fcb5ebd7aeee551794853fcba5d9153eca108dbbefaecfd962e38c5f3d"
	I1217 20:03:41.855894  677765 cri.go:89] found id: "537a5407ce604a89aeaa3dfb925609467a6bd3eeb7abd61d4ca526f32aafd92b"
	I1217 20:03:41.855936  677765 cri.go:89] found id: "138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350"
	I1217 20:03:41.855943  677765 cri.go:89] found id: "908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb"
	I1217 20:03:41.855947  677765 cri.go:89] found id: "9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40"
	I1217 20:03:41.855952  677765 cri.go:89] found id: "65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d"
	I1217 20:03:41.855957  677765 cri.go:89] found id: "d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231"
	I1217 20:03:41.855976  677765 cri.go:89] found id: "7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	I1217 20:03:41.855980  677765 cri.go:89] found id: "4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf"
	I1217 20:03:41.855985  677765 cri.go:89] found id: ""
	I1217 20:03:41.856125  677765 ssh_runner.go:195] Run: sudo runc list -f json
	I1217 20:03:41.877941  677765 out.go:203] 
	W1217 20:03:41.879344  677765 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:03:41Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T20:03:41Z" level=error msg="open /run/runc: no such file or directory"
	
	W1217 20:03:41.879379  677765 out.go:285] * 
	* 
	W1217 20:03:41.887883  677765 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 20:03:41.889542  677765 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-147021 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-147021
helpers_test.go:244: (dbg) docker inspect embed-certs-147021:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8",
	        "Created": "2025-12-17T20:01:40.099829209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:02:43.901732416Z",
	            "FinishedAt": "2025-12-17T20:02:41.982920333Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/hosts",
	        "LogPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8-json.log",
	        "Name": "/embed-certs-147021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-147021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-147021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8",
	                "LowerDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-147021",
	                "Source": "/var/lib/docker/volumes/embed-certs-147021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-147021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-147021",
	                "name.minikube.sigs.k8s.io": "embed-certs-147021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cc26056bc6f76de2c3b659736415471e992f388dd5a85151decc80a15cb978ce",
	            "SandboxKey": "/var/run/docker/netns/cc26056bc6f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-147021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0eb8a5e286382abd016e9750b18658c10571b76b24cafa91dc20ab0a3e49d6a",
	                    "EndpointID": "fb72742245b5dc815cd26486471925d63a94e962c5b56408fbc6074e8f348698",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f2:6f:c3:63:be:c2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-147021",
	                        "83dda83adbe1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
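The inspect output above is what the test helpers (and minikube itself, via the `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` template that appears later in this log) rely on to find the forwarded SSH port for a profile container. A minimal Go sketch of that lookup, assuming only the NetworkSettings.Ports shape shown in the JSON above; the container name is taken from this report and the helper is illustrative, not minikube's code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectResult models just the fields of `docker container inspect` output
// that the port lookup needs (see the NetworkSettings.Ports block above).
type inspectResult struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// hostPort returns the host port bound to the given container port,
// e.g. hostPort("embed-certs-147021", "22/tcp").
func hostPort(container, containerPort string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", container).Output()
	if err != nil {
		return "", err
	}
	var results []inspectResult
	if err := json.Unmarshal(out, &results); err != nil {
		return "", err
	}
	if len(results) == 0 {
		return "", fmt.Errorf("container %q not found", container)
	}
	bindings := results[0].NetworkSettings.Ports[containerPort]
	if len(bindings) == 0 {
		return "", fmt.Errorf("no host binding for %s", containerPort)
	}
	return bindings[0].HostPort, nil
}

func main() {
	port, err := hostPort("embed-certs-147021", "22/tcp")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh host port:", port)
}

Against the container inspected above this would print 33488, the binding listed for 22/tcp.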
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021: exit status 2 (447.62271ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-147021 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-147021 logs -n 25: (1.864907855s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                     │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p kindnet-601560 pgrep -a kubelet                                           │ kindnet-601560     │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo ip a s                                                   │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo ip r s                                                   │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo iptables-save                                            │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo iptables -t nat -L -n -v                                 │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status kubelet --all --full --no-pager         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl cat kubelet --no-pager                         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo journalctl -xeu kubelet --all --full --no-pager          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/kubernetes/kubelet.conf                         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /var/lib/kubelet/config.yaml                         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status docker --all --full --no-pager          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo systemctl cat docker --no-pager                          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/docker/daemon.json                              │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo docker system info                                       │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ image   │ embed-certs-147021 image list --format=json                                  │ embed-certs-147021 │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status cri-docker --all --full --no-pager      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ pause   │ -p embed-certs-147021 --alsologtostderr -v=1                                 │ embed-certs-147021 │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo systemctl cat cri-docker --no-pager                      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo cat /usr/lib/systemd/system/cri-docker.service           │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cri-dockerd --version                                    │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status containerd --all --full --no-pager      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo systemctl cat containerd --no-pager                      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /lib/systemd/system/containerd.service               │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/containerd/config.toml                          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:03:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:03:06.319588  670841 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:03:06.319905  670841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:03:06.319917  670841 out.go:374] Setting ErrFile to fd 2...
	I1217 20:03:06.319922  670841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:03:06.320211  670841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:03:06.320788  670841 out.go:368] Setting JSON to false
	I1217 20:03:06.322201  670841 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6337,"bootTime":1765995449,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:03:06.322262  670841 start.go:143] virtualization: kvm guest
	I1217 20:03:06.324189  670841 out.go:179] * [calico-601560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:03:06.325762  670841 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:03:06.325831  670841 notify.go:221] Checking for updates...
	I1217 20:03:06.328831  670841 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:03:06.330214  670841 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:03:06.331788  670841 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:03:06.333463  670841 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:03:06.334623  670841 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:03:06.336565  670841 config.go:182] Loaded profile config "auto-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:06.336723  670841 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:06.336847  670841 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:06.336979  670841 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:03:06.364886  670841 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:03:06.365000  670841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:03:06.424631  670841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:03:06.413727225 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:03:06.424783  670841 docker.go:319] overlay module found
	I1217 20:03:06.427426  670841 out.go:179] * Using the docker driver based on user configuration
	I1217 20:03:06.428689  670841 start.go:309] selected driver: docker
	I1217 20:03:06.428712  670841 start.go:927] validating driver "docker" against <nil>
	I1217 20:03:06.428728  670841 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:03:06.429505  670841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:03:06.493095  670841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:03:06.482003005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:03:06.493274  670841 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:03:06.493508  670841 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:03:06.495120  670841 out.go:179] * Using Docker driver with root privileges
	I1217 20:03:06.496172  670841 cni.go:84] Creating CNI manager for "calico"
	I1217 20:03:06.496193  670841 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1217 20:03:06.496285  670841 start.go:353] cluster config:
	{Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:03:06.497678  670841 out.go:179] * Starting "calico-601560" primary control-plane node in "calico-601560" cluster
	I1217 20:03:06.498794  670841 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:03:06.499971  670841 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:03:06.501023  670841 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:03:06.501056  670841 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:03:06.501066  670841 cache.go:65] Caching tarball of preloaded images
	I1217 20:03:06.501120  670841 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:03:06.501214  670841 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:03:06.501231  670841 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:03:06.501327  670841 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/config.json ...
	I1217 20:03:06.501352  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/config.json: {Name:mk6be0b9c208b74fea01fd07612f22127d8f64c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:06.522048  670841 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:03:06.522070  670841 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:03:06.522115  670841 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:03:06.522175  670841 start.go:360] acquireMachinesLock for calico-601560: {Name:mke3872ce2d2a14c829289822bac63089cff205d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:03:06.522289  670841 start.go:364] duration metric: took 90.527µs to acquireMachinesLock for "calico-601560"
	I1217 20:03:06.522323  670841 start.go:93] Provisioning new machine with config: &{Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:03:06.522402  670841 start.go:125] createHost starting for "" (driver="docker")
	I1217 20:03:04.651635  661899 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:03:04.656398  661899 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:03:04.656420  661899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:03:04.680303  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:03:05.107294  661899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:03:05.107396  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:05.107429  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-601560 minikube.k8s.io/updated_at=2025_12_17T20_03_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=kindnet-601560 minikube.k8s.io/primary=true
	I1217 20:03:05.118956  661899 ops.go:34] apiserver oom_adj: -16
	I1217 20:03:05.196882  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:05.697507  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:06.197676  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:06.697717  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:07.197033  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:07.697552  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:08.197878  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 20:03:05.959230  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:07.959482  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:08.697649  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:08.785919  661899 kubeadm.go:1114] duration metric: took 3.678598199s to wait for elevateKubeSystemPrivileges
	I1217 20:03:08.785983  661899 kubeadm.go:403] duration metric: took 17.944375799s to StartCluster
	I1217 20:03:08.786011  661899 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:08.786130  661899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:03:08.788373  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:08.788664  661899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:03:08.788666  661899 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:03:08.788753  661899 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:03:08.788864  661899 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:08.788869  661899 addons.go:70] Setting storage-provisioner=true in profile "kindnet-601560"
	I1217 20:03:08.788893  661899 addons.go:239] Setting addon storage-provisioner=true in "kindnet-601560"
	I1217 20:03:08.788894  661899 addons.go:70] Setting default-storageclass=true in profile "kindnet-601560"
	I1217 20:03:08.788928  661899 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-601560"
	I1217 20:03:08.788932  661899 host.go:66] Checking if "kindnet-601560" exists ...
	I1217 20:03:08.789357  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:03:08.789569  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:03:08.790648  661899 out.go:179] * Verifying Kubernetes components...
	I1217 20:03:08.795675  661899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:08.821268  661899 addons.go:239] Setting addon default-storageclass=true in "kindnet-601560"
	I1217 20:03:08.821323  661899 host.go:66] Checking if "kindnet-601560" exists ...
	I1217 20:03:08.821828  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:03:08.824049  661899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1217 20:03:04.628802  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	W1217 20:03:06.629486  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	I1217 20:03:08.825880  661899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:08.825913  661899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:03:08.825975  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:03:08.859514  661899 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:08.859542  661899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:03:08.859610  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:03:08.862345  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:03:08.904999  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:03:08.936633  661899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:03:08.984519  661899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:03:09.004738  661899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:09.042429  661899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:09.128183  661899 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 20:03:09.129752  661899 node_ready.go:35] waiting up to 15m0s for node "kindnet-601560" to be "Ready" ...
	I1217 20:03:09.375619  661899 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:03:06.524213  670841 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:03:06.524455  670841 start.go:159] libmachine.API.Create for "calico-601560" (driver="docker")
	I1217 20:03:06.524493  670841 client.go:173] LocalClient.Create starting
	I1217 20:03:06.524585  670841 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:03:06.524627  670841 main.go:143] libmachine: Decoding PEM data...
	I1217 20:03:06.524644  670841 main.go:143] libmachine: Parsing certificate...
	I1217 20:03:06.524695  670841 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:03:06.524713  670841 main.go:143] libmachine: Decoding PEM data...
	I1217 20:03:06.524723  670841 main.go:143] libmachine: Parsing certificate...
	I1217 20:03:06.525190  670841 cli_runner.go:164] Run: docker network inspect calico-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:03:06.543887  670841 cli_runner.go:211] docker network inspect calico-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:03:06.543968  670841 network_create.go:284] running [docker network inspect calico-601560] to gather additional debugging logs...
	I1217 20:03:06.543987  670841 cli_runner.go:164] Run: docker network inspect calico-601560
	W1217 20:03:06.562251  670841 cli_runner.go:211] docker network inspect calico-601560 returned with exit code 1
	I1217 20:03:06.562285  670841 network_create.go:287] error running [docker network inspect calico-601560]: docker network inspect calico-601560: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-601560 not found
	I1217 20:03:06.562301  670841 network_create.go:289] output of [docker network inspect calico-601560]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-601560 not found
	
	** /stderr **
	I1217 20:03:06.562456  670841 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:03:06.582566  670841 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:03:06.583606  670841 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:03:06.584154  670841 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:03:06.584791  670841 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e9e9e3776c58 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:28:1b:8b:8b:04} reservation:<nil>}
	I1217 20:03:06.585350  670841 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-d0eb8a5e2863 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:9f:ed:c7:db:49} reservation:<nil>}
	I1217 20:03:06.586190  670841 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0850}
	I1217 20:03:06.586214  670841 network_create.go:124] attempt to create docker network calico-601560 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 20:03:06.586281  670841 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-601560 calico-601560
	I1217 20:03:06.640521  670841 network_create.go:108] docker network calico-601560 192.168.94.0/24 created
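The five "skipping subnet ... that is taken" lines followed by "using free private subnet 192.168.94.0/24" show the selection pattern: candidate 192.168.x.0/24 ranges are tried in order and the first one not already claimed by an existing bridge wins. A minimal Go sketch of that shape, assuming a step of 9 in the third octet and a pre-collected set of taken subnets (both read off this log for illustration, not from minikube's implementation):

package main

import "fmt"

// firstFreeSubnet returns the first candidate /24 that is not already in use.
func firstFreeSubnet(taken map[string]bool) string {
	for third := 49; third < 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	// Subnets this log reports as already occupied by existing bridges.
	taken := map[string]bool{
		"192.168.49.0/24": true,
		"192.168.58.0/24": true,
		"192.168.67.0/24": true,
		"192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken))
}

Run as-is this prints 192.168.94.0/24, matching the subnet the log settles on for calico-601560.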
	I1217 20:03:06.640553  670841 kic.go:121] calculated static IP "192.168.94.2" for the "calico-601560" container
	I1217 20:03:06.640711  670841 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:03:06.659789  670841 cli_runner.go:164] Run: docker volume create calico-601560 --label name.minikube.sigs.k8s.io=calico-601560 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:03:06.679536  670841 oci.go:103] Successfully created a docker volume calico-601560
	I1217 20:03:06.679611  670841 cli_runner.go:164] Run: docker run --rm --name calico-601560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-601560 --entrypoint /usr/bin/test -v calico-601560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:03:07.115152  670841 oci.go:107] Successfully prepared a docker volume calico-601560
	I1217 20:03:07.115226  670841 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:03:07.115238  670841 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:03:07.115294  670841 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-601560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:03:09.376928  661899 addons.go:530] duration metric: took 588.17152ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:03:09.720426  661899 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-601560" context rescaled to 1 replicas
	W1217 20:03:11.133740  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:09.959713  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:11.960862  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:09.129477  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	W1217 20:03:11.628957  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	I1217 20:03:12.628696  660659 node_ready.go:49] node "auto-601560" is "Ready"
	I1217 20:03:12.628736  660659 node_ready.go:38] duration metric: took 12.503409226s for node "auto-601560" to be "Ready" ...
	I1217 20:03:12.628766  660659 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:03:12.628833  660659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:03:12.643155  660659 api_server.go:72] duration metric: took 13.096462901s to wait for apiserver process to appear ...
	I1217 20:03:12.643198  660659 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:03:12.643223  660659 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:03:12.647637  660659 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 20:03:12.648699  660659 api_server.go:141] control plane version: v1.34.3
	I1217 20:03:12.648728  660659 api_server.go:131] duration metric: took 5.52285ms to wait for apiserver health ...
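The healthz step above amounts to an HTTPS GET against the apiserver and treating a 200 response with body "ok" as healthy. A minimal sketch of that probe, assuming the endpoint shown in this log; TLS verification is skipped only to keep the example self-contained (the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the healthz endpoint returned 200 "ok".
func apiserverHealthy(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Insecure purely for this sketch; production code verifies the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.76.2:8443/healthz")
	fmt.Println(ok, err)
}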
	I1217 20:03:12.648738  660659 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:03:12.653240  660659 system_pods.go:59] 8 kube-system pods found
	I1217 20:03:12.653279  660659 system_pods.go:61] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:12.653289  660659 system_pods.go:61] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:12.653298  660659 system_pods.go:61] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:12.653303  660659 system_pods.go:61] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:12.653309  660659 system_pods.go:61] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:12.653317  660659 system_pods.go:61] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:12.653322  660659 system_pods.go:61] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:12.653330  660659 system_pods.go:61] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:12.653341  660659 system_pods.go:74] duration metric: took 4.595246ms to wait for pod list to return data ...
	I1217 20:03:12.653352  660659 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:03:12.655887  660659 default_sa.go:45] found service account: "default"
	I1217 20:03:12.655911  660659 default_sa.go:55] duration metric: took 2.54856ms for default service account to be created ...
	I1217 20:03:12.655923  660659 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:03:12.659065  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:12.659137  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:12.659146  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:12.659155  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:12.659160  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:12.659175  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:12.659183  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:12.659187  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:12.659195  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:12.659226  660659 retry.go:31] will retry after 262.559806ms: missing components: kube-dns
	I1217 20:03:12.926542  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:12.926578  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:12.926585  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:12.926592  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:12.926596  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:12.926600  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:12.926604  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:12.926607  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:12.926618  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:12.926647  660659 retry.go:31] will retry after 389.337916ms: missing components: kube-dns
	I1217 20:03:13.319939  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:13.319972  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:13.319979  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:13.319986  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:13.319991  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:13.319995  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:13.319998  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:13.320001  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:13.320006  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:13.320025  660659 retry.go:31] will retry after 343.958159ms: missing components: kube-dns
	I1217 20:03:13.668620  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:13.668649  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Running
	I1217 20:03:13.668655  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:13.668659  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:13.668662  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:13.668668  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:13.668672  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:13.668675  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:13.668678  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Running
	I1217 20:03:13.668686  660659 system_pods.go:126] duration metric: took 1.012755801s to wait for k8s-apps to be running ...
	I1217 20:03:13.668696  660659 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:03:13.668742  660659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:13.681722  660659 system_svc.go:56] duration metric: took 13.011537ms WaitForService to wait for kubelet
	I1217 20:03:13.681757  660659 kubeadm.go:587] duration metric: took 14.135072375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:03:13.681783  660659 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:03:13.685089  660659 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:03:13.685119  660659 node_conditions.go:123] node cpu capacity is 8
	I1217 20:03:13.685139  660659 node_conditions.go:105] duration metric: took 3.349536ms to run NodePressure ...
	I1217 20:03:13.685155  660659 start.go:242] waiting for startup goroutines ...
	I1217 20:03:13.685164  660659 start.go:247] waiting for cluster config update ...
	I1217 20:03:13.685188  660659 start.go:256] writing updated cluster config ...
	I1217 20:03:13.685486  660659 ssh_runner.go:195] Run: rm -f paused
	I1217 20:03:13.689795  660659 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:13.693579  660659 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-29z8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.698310  660659 pod_ready.go:94] pod "coredns-66bc5c9577-29z8k" is "Ready"
	I1217 20:03:13.698341  660659 pod_ready.go:86] duration metric: took 4.737405ms for pod "coredns-66bc5c9577-29z8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.700751  660659 pod_ready.go:83] waiting for pod "etcd-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.705761  660659 pod_ready.go:94] pod "etcd-auto-601560" is "Ready"
	I1217 20:03:13.705787  660659 pod_ready.go:86] duration metric: took 5.009682ms for pod "etcd-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.707955  660659 pod_ready.go:83] waiting for pod "kube-apiserver-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.711879  660659 pod_ready.go:94] pod "kube-apiserver-auto-601560" is "Ready"
	I1217 20:03:13.711898  660659 pod_ready.go:86] duration metric: took 3.91844ms for pod "kube-apiserver-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.714007  660659 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.094423  660659 pod_ready.go:94] pod "kube-controller-manager-auto-601560" is "Ready"
	I1217 20:03:14.094453  660659 pod_ready.go:86] duration metric: took 380.4236ms for pod "kube-controller-manager-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.294921  660659 pod_ready.go:83] waiting for pod "kube-proxy-6tvf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.694779  660659 pod_ready.go:94] pod "kube-proxy-6tvf2" is "Ready"
	I1217 20:03:14.694808  660659 pod_ready.go:86] duration metric: took 399.855347ms for pod "kube-proxy-6tvf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.894375  660659 pod_ready.go:83] waiting for pod "kube-scheduler-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:15.293849  660659 pod_ready.go:94] pod "kube-scheduler-auto-601560" is "Ready"
	I1217 20:03:15.293878  660659 pod_ready.go:86] duration metric: took 399.470896ms for pod "kube-scheduler-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:15.293890  660659 pod_ready.go:40] duration metric: took 1.604061758s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:15.342087  660659 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:03:15.344166  660659 out.go:179] * Done! kubectl is now configured to use "auto-601560" cluster and "default" namespace by default
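The pod_ready waits above poll each kube-system pod until its Ready condition reports true, retrying within the 4m0s budget. A minimal sketch of that pattern with client-go (illustrative only, not minikube's own helper; the kubeconfig path and the pod name are assumptions taken from this run):

// podready_sketch.go - illustrative poll for a pod's Ready condition (not minikube code).
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isReady reports whether the pod's Ready condition is True.
func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig path; the test harness writes its own under the integration dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(4 * time.Minute) // mirrors the "extra waiting up to 4m0s" above
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-29z8k", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log shows sub-second retry intervals
	}
	fmt.Println("timed out waiting for pod to become Ready")
}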
	I1217 20:03:11.349010  670841 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-601560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.233672828s)
	I1217 20:03:11.349045  670841 kic.go:203] duration metric: took 4.233803032s to extract preloaded images to volume ...
	W1217 20:03:11.349172  670841 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:03:11.349206  670841 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:03:11.349268  670841 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:03:11.416488  670841 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-601560 --name calico-601560 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-601560 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-601560 --network calico-601560 --ip 192.168.94.2 --volume calico-601560:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 20:03:11.743903  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Running}}
	I1217 20:03:11.765532  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:11.788594  670841 cli_runner.go:164] Run: docker exec calico-601560 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:03:11.844141  670841 oci.go:144] the created container "calico-601560" has a running status.
	I1217 20:03:11.844177  670841 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa...
	I1217 20:03:11.921813  670841 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:03:11.952850  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:11.977649  670841 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:03:11.977679  670841 kic_runner.go:114] Args: [docker exec --privileged calico-601560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:03:12.043236  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:12.068255  670841 machine.go:94] provisionDockerMachine start ...
	I1217 20:03:12.068380  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:12.099514  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:12.099950  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:12.099981  670841 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:03:12.100849  670841 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35966->127.0.0.1:33493: read: connection reset by peer
	I1217 20:03:15.249372  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-601560
	
	I1217 20:03:15.249406  670841 ubuntu.go:182] provisioning hostname "calico-601560"
	I1217 20:03:15.249471  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.268393  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:15.268658  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:15.268673  670841 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-601560 && echo "calico-601560" | sudo tee /etc/hostname
	I1217 20:03:15.431953  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-601560
	
	I1217 20:03:15.432028  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.453345  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:15.453599  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:15.453623  670841 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-601560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-601560/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-601560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:03:15.606909  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:03:15.606954  670841 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:03:15.606981  670841 ubuntu.go:190] setting up certificates
	I1217 20:03:15.607003  670841 provision.go:84] configureAuth start
	I1217 20:03:15.607070  670841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-601560
	I1217 20:03:15.625605  670841 provision.go:143] copyHostCerts
	I1217 20:03:15.625682  670841 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:03:15.625697  670841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:03:15.625771  670841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:03:15.625883  670841 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:03:15.625892  670841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:03:15.625921  670841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:03:15.626010  670841 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:03:15.626018  670841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:03:15.626044  670841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:03:15.626144  670841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.calico-601560 san=[127.0.0.1 192.168.94.2 calico-601560 localhost minikube]
	I1217 20:03:15.762350  670841 provision.go:177] copyRemoteCerts
	I1217 20:03:15.762417  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:03:15.762468  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.782377  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:15.892508  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:03:15.914325  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:03:15.932134  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:03:15.950593  670841 provision.go:87] duration metric: took 343.569515ms to configureAuth
	I1217 20:03:15.950626  670841 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:03:15.950844  670841 config.go:182] Loaded profile config "calico-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:15.951019  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.970726  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:15.970969  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:15.970989  670841 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:03:16.273601  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:03:16.273650  670841 machine.go:97] duration metric: took 4.205368825s to provisionDockerMachine
	I1217 20:03:16.273663  670841 client.go:176] duration metric: took 9.749161975s to LocalClient.Create
	I1217 20:03:16.273684  670841 start.go:167] duration metric: took 9.749229635s to libmachine.API.Create "calico-601560"
	I1217 20:03:16.273694  670841 start.go:293] postStartSetup for "calico-601560" (driver="docker")
	I1217 20:03:16.273713  670841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:03:16.273797  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:03:16.273849  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.293164  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.405690  670841 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:03:16.411884  670841 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:03:16.411924  670841 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:03:16.411939  670841 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:03:16.411998  670841 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:03:16.412091  670841 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:03:16.412255  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:03:16.425334  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:03:16.457378  670841 start.go:296] duration metric: took 183.663577ms for postStartSetup
	I1217 20:03:16.458843  670841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-601560
	I1217 20:03:16.488319  670841 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/config.json ...
	I1217 20:03:16.488587  670841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:03:16.488637  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.516820  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.638115  670841 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:03:16.643746  670841 start.go:128] duration metric: took 10.121327214s to createHost
	I1217 20:03:16.643773  670841 start.go:83] releasing machines lock for "calico-601560", held for 10.121467733s
	I1217 20:03:16.643838  670841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-601560
	I1217 20:03:16.668442  670841 ssh_runner.go:195] Run: cat /version.json
	I1217 20:03:16.668510  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.668528  670841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:03:16.668717  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.695591  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.696443  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.896713  670841 ssh_runner.go:195] Run: systemctl --version
	I1217 20:03:16.905676  670841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:03:16.958282  670841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:03:16.964720  670841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:03:16.964800  670841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:03:16.997994  670841 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:03:16.998019  670841 start.go:496] detecting cgroup driver to use...
	I1217 20:03:16.998053  670841 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:03:16.998114  670841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:03:17.018338  670841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:03:17.035675  670841 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:03:17.035750  670841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:03:17.058121  670841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:03:17.083226  670841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:03:17.200624  670841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:03:17.313524  670841 docker.go:234] disabling docker service ...
	I1217 20:03:17.313613  670841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:03:17.335716  670841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:03:17.351352  670841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:03:17.465295  670841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:03:17.589889  670841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:03:17.606031  670841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:03:17.621660  670841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:03:17.621915  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.634404  670841 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:03:17.634486  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.645731  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.656201  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.667047  670841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:03:17.676894  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.687247  670841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.703651  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.713807  670841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:03:17.723176  670841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:03:17.731324  670841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:17.821245  670841 ssh_runner.go:195] Run: sudo systemctl restart crio
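The sed invocations above (crio.go:59 and crio.go:70) point CRI-O at the registry.k8s.io/pause:3.10.1 pause image and switch it to the systemd cgroup manager before the runtime is restarted. A rough Go equivalent of those two edits (illustrative only; minikube itself shells out to sed over SSH as shown):

// criocfg_sketch.go - illustrative equivalent of the two sed edits above (not minikube code).
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // config file named in the log

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Rewrite the pause_image and cgroup_manager lines in place, as the sed commands do.
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))

	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
	// CRI-O still has to be restarted afterwards (the log runs `systemctl restart crio`).
}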
	I1217 20:03:18.261378  670841 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:03:18.261483  670841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:03:18.265978  670841 start.go:564] Will wait 60s for crictl version
	I1217 20:03:18.266037  670841 ssh_runner.go:195] Run: which crictl
	I1217 20:03:18.269929  670841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:03:18.296064  670841 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:03:18.296202  670841 ssh_runner.go:195] Run: crio --version
	I1217 20:03:18.326516  670841 ssh_runner.go:195] Run: crio --version
	I1217 20:03:18.357661  670841 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1217 20:03:13.633954  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:16.133355  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:14.459986  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:16.461902  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:18.359057  670841 cli_runner.go:164] Run: docker network inspect calico-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:03:18.378394  670841 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 20:03:18.382759  670841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
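The /bin/bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends a fresh mapping to the network gateway (192.168.94.1 for this calico-601560 network). A small Go sketch of the same idea (illustrative; minikube runs the shell pipeline shown, not this code):

// hostsentry_sketch.go - illustrative rewrite of /etc/hosts for host.minikube.internal (not minikube code).
package main

import (
	"os"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.94.1\thost.minikube.internal" // gateway IP taken from the log

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Keep every line except an existing host.minikube.internal mapping, then append the new one.
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}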
	I1217 20:03:18.393903  670841 kubeadm.go:884] updating cluster {Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:03:18.394049  670841 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:03:18.394164  670841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:03:18.429026  670841 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:03:18.429049  670841 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:03:18.429122  670841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:03:18.457449  670841 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:03:18.457476  670841 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:03:18.457488  670841 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1217 20:03:18.457616  670841 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-601560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1217 20:03:18.457766  670841 ssh_runner.go:195] Run: crio config
	I1217 20:03:18.512976  670841 cni.go:84] Creating CNI manager for "calico"
	I1217 20:03:18.513005  670841 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:03:18.513030  670841 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-601560 NodeName:calico-601560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:03:18.513243  670841 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-601560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:03:18.513329  670841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:03:18.521923  670841 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:03:18.521995  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:03:18.530138  670841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:03:18.543217  670841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:03:18.560033  670841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
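The 2209-byte file written above is the multi-document kubeadm config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration separated by ---). A minimal sketch that walks such a file and prints each document's apiVersion and kind (illustrative; gopkg.in/yaml.v3 and the local file name are assumptions, not what minikube uses here):

// kubeadmcfg_sketch.go - illustrative reader for a multi-document kubeadm config (not minikube code).
package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("kubeadm.yaml") // assumed local copy of the generated config
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Each YAML document carries its own apiVersion/kind pair.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}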
	I1217 20:03:18.574304  670841 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:03:18.578553  670841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:03:18.589114  670841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:18.673746  670841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:03:18.698045  670841 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560 for IP: 192.168.94.2
	I1217 20:03:18.698070  670841 certs.go:195] generating shared ca certs ...
	I1217 20:03:18.698115  670841 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.698294  670841 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:03:18.698348  670841 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:03:18.698362  670841 certs.go:257] generating profile certs ...
	I1217 20:03:18.698430  670841 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.key
	I1217 20:03:18.698447  670841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.crt with IP's: []
	I1217 20:03:18.767536  670841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.crt ...
	I1217 20:03:18.767564  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.crt: {Name:mk9432b36e502caa91a481e22b2148bdf1b5a0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.767747  670841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.key ...
	I1217 20:03:18.767760  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.key: {Name:mk0aa051b1af273ba8dda0ecd55fff85b70738d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.767874  670841 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960
	I1217 20:03:18.767892  670841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 20:03:18.836433  670841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960 ...
	I1217 20:03:18.836465  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960: {Name:mk7c226a215234bbd382f19004717f88577a0952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.836672  670841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960 ...
	I1217 20:03:18.836686  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960: {Name:mk874ef5860b4367b6edcbb5951bd272e8d07ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.836769  670841 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt
	I1217 20:03:18.836869  670841 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key
	I1217 20:03:18.836933  670841 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key
	I1217 20:03:18.836948  670841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt with IP's: []
	I1217 20:03:19.015934  670841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt ...
	I1217 20:03:19.015971  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt: {Name:mkbda0282c222c3ee0f68a01da3f3a26249900be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:19.016179  670841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key ...
	I1217 20:03:19.016202  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key: {Name:mk9b046a017a0c67c4a76c3103a96847209531f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
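The certs.go/crypto.go lines above mint profile certificates (client, apiserver, proxy-client) signed by the shared minikube CA, whose ca.crt/ca.key are copied to the node a few lines later. A compact crypto/x509 sketch of CA-signed client-certificate generation (illustrative only; the local paths, RSA/PKCS#1 key format, subject, and lifetime are assumptions, not minikube's actual parameters):

// profilecert_sketch.go - illustrative CA-signed client certificate generation (not minikube code).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

// mustDecode returns the DER bytes of the first PEM block, panicking if none is found.
func mustDecode(pemBytes []byte) []byte {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	return block.Bytes
}

func main() {
	caCertPEM, err := os.ReadFile("ca.crt") // assumed local copies of the CA pair
	check(err)
	caKeyPEM, err := os.ReadFile("ca.key")
	check(err)

	caCert, err := x509.ParseCertificate(mustDecode(caCertPEM))
	check(err)
	caKey, err := x509.ParsePKCS1PrivateKey(mustDecode(caKeyPEM)) // assumes an RSA PKCS#1 key
	check(err)

	// Fresh key pair for a "minikube-user"-style client certificate.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube-user"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	check(err)

	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}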
	I1217 20:03:19.016379  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:03:19.016417  670841 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:03:19.016427  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:03:19.016452  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:03:19.016478  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:03:19.016500  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:03:19.016537  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:03:19.017148  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:03:19.037606  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:03:19.057014  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:03:19.076665  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:03:19.096520  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:03:19.115530  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:03:19.134976  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:03:19.154282  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:03:19.173406  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:03:19.193624  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:03:19.212968  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:03:19.231415  670841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:03:19.245722  670841 ssh_runner.go:195] Run: openssl version
	I1217 20:03:19.252257  670841 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.260388  670841 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:03:19.268681  670841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.272908  670841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.272982  670841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.308399  670841 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:03:19.317292  670841 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:03:19.325519  670841 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.333430  670841 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:03:19.341556  670841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.345735  670841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.345789  670841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.381717  670841 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:03:19.390332  670841 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:03:19.399606  670841 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.407353  670841 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:03:19.416130  670841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.420861  670841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.420926  670841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.460029  670841 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:03:19.468941  670841 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
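The openssl x509 -hash calls above compute the subject-hash names (3ec20f2e.0, b5213941.0, ...) used for the /etc/ssl/certs symlinks. A small sketch that merely loads and inspects such a PEM with crypto/x509 (illustrative; it does not reproduce openssl's subject-hash computation):

// certinspect_sketch.go - illustrative PEM inspection for a CA such as minikubeCA.pem (not minikube code).
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/usr/share/ca-certificates/minikubeCA.pem") // path taken from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		panic("no certificate PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("subject: ", cert.Subject)
	fmt.Println("notAfter:", cert.NotAfter)
	fmt.Println("isCA:    ", cert.IsCA)
}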
	I1217 20:03:19.478531  670841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:03:19.482955  670841 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:03:19.483017  670841 kubeadm.go:401] StartCluster: {Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:03:19.483150  670841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:03:19.483219  670841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:03:19.512353  670841 cri.go:89] found id: ""
	I1217 20:03:19.512430  670841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:03:19.521235  670841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:03:19.529915  670841 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:03:19.529978  670841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:03:19.538113  670841 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:03:19.538136  670841 kubeadm.go:158] found existing configuration files:
	
	I1217 20:03:19.538192  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:03:19.546158  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:03:19.546227  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:03:19.554339  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:03:19.562696  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:03:19.562756  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:03:19.570697  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:03:19.578950  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:03:19.579017  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:03:19.589583  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:03:19.598955  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:03:19.599007  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:03:19.608025  670841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:03:19.655407  670841 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:03:19.655500  670841 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:03:19.677239  670841 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:03:19.677303  670841 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:03:19.677335  670841 kubeadm.go:319] OS: Linux
	I1217 20:03:19.677386  670841 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:03:19.677444  670841 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:03:19.677569  670841 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:03:19.677632  670841 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:03:19.677674  670841 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:03:19.677715  670841 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:03:19.677773  670841 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:03:19.677816  670841 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:03:19.741105  670841 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:03:19.741272  670841 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:03:19.741546  670841 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:03:19.750023  670841 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:03:19.753242  670841 out.go:252]   - Generating certificates and keys ...
	I1217 20:03:19.753357  670841 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:03:19.753456  670841 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:03:20.239110  670841 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:03:20.608276  670841 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:03:21.218915  670841 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:03:21.525425  670841 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:03:21.851767  670841 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:03:21.851961  670841 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-601560 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:03:21.939356  670841 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:03:21.939473  670841 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-601560 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:03:22.307543  670841 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:03:22.421761  670841 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:03:22.510564  670841 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:03:22.510665  670841 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:03:22.609012  670841 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:03:22.646737  670841 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:03:22.879851  670841 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:03:23.141326  670841 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:03:23.238009  670841 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:03:23.238550  670841 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:03:23.242668  670841 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 20:03:18.633906  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:21.133478  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:18.959478  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:20.960520  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:22.961318  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:23.244170  670841 out.go:252]   - Booting up control plane ...
	I1217 20:03:23.244253  670841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:03:23.244346  670841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:03:23.245131  670841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:03:23.259283  670841 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:03:23.259417  670841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:03:23.266344  670841 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:03:23.266700  670841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:03:23.266748  670841 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:03:23.373780  670841 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:03:23.373931  670841 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:03:23.875663  670841 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.022631ms
	I1217 20:03:23.879701  670841 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:03:23.879851  670841 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 20:03:23.880003  670841 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:03:23.880154  670841 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:03:25.939785  670841 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.060056721s
	I1217 20:03:26.154205  670841 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.274445016s
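The kubeadm lines above name the exact health endpoints being polled: the kubelet on http://127.0.0.1:10248/healthz, kube-controller-manager on https://127.0.0.1:10257/healthz, kube-scheduler on https://127.0.0.1:10259/livez, and kube-apiserver on https://192.168.94.2:8443/livez. A minimal sketch of probing the same endpoints by hand from inside the node (assuming `minikube ssh -p calico-601560` reaches the control plane and the addresses match what kubeadm printed; -k only skips verification of the cluster's self-signed serving certs):

    # run inside the control-plane node, e.g. via: minikube ssh -p calico-601560
    curl -sf  http://127.0.0.1:10248/healthz  && echo 'kubelet: ok'
    curl -skf https://127.0.0.1:10257/healthz && echo 'kube-controller-manager: ok'
    curl -skf https://127.0.0.1:10259/livez   && echo 'kube-scheduler: ok'
    curl -skf https://192.168.94.2:8443/livez && echo 'kube-apiserver: ok'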
	I1217 20:03:23.632909  661899 node_ready.go:49] node "kindnet-601560" is "Ready"
	I1217 20:03:23.632943  661899 node_ready.go:38] duration metric: took 14.503161667s for node "kindnet-601560" to be "Ready" ...
	I1217 20:03:23.632969  661899 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:03:23.633033  661899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:03:23.646556  661899 api_server.go:72] duration metric: took 14.857848207s to wait for apiserver process to appear ...
	I1217 20:03:23.646585  661899 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:03:23.646607  661899 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:03:23.651302  661899 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 20:03:23.652378  661899 api_server.go:141] control plane version: v1.34.3
	I1217 20:03:23.652406  661899 api_server.go:131] duration metric: took 5.812904ms to wait for apiserver health ...
	I1217 20:03:23.652424  661899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:03:23.656141  661899 system_pods.go:59] 8 kube-system pods found
	I1217 20:03:23.656172  661899 system_pods.go:61] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:23.656177  661899 system_pods.go:61] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:23.656183  661899 system_pods.go:61] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:23.656187  661899 system_pods.go:61] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:23.656197  661899 system_pods.go:61] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:23.656203  661899 system_pods.go:61] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:23.656208  661899 system_pods.go:61] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:23.656216  661899 system_pods.go:61] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:23.656232  661899 system_pods.go:74] duration metric: took 3.80108ms to wait for pod list to return data ...
	I1217 20:03:23.656253  661899 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:03:23.658725  661899 default_sa.go:45] found service account: "default"
	I1217 20:03:23.658745  661899 default_sa.go:55] duration metric: took 2.483109ms for default service account to be created ...
	I1217 20:03:23.658771  661899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:03:23.662172  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:23.662213  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:23.662220  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:23.662228  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:23.662234  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:23.662240  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:23.662245  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:23.662251  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:23.662258  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:23.662296  661899 retry.go:31] will retry after 256.842456ms: missing components: kube-dns
	I1217 20:03:23.924310  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:23.924362  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:23.924373  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:23.924382  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:23.924391  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:23.924396  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:23.924482  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:23.924487  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:23.924494  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:23.924518  661899 retry.go:31] will retry after 245.393606ms: missing components: kube-dns
	I1217 20:03:24.174541  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:24.174575  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:24.174581  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:24.174588  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:24.174592  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:24.174595  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:24.174604  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:24.174607  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:24.174612  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:24.174627  661899 retry.go:31] will retry after 471.649962ms: missing components: kube-dns
	I1217 20:03:24.653747  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:24.653810  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:24.653821  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:24.653832  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:24.653840  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:24.653854  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:24.653865  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:24.653873  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:24.653883  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:24.653914  661899 retry.go:31] will retry after 610.069127ms: missing components: kube-dns
	I1217 20:03:25.269066  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:25.269118  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Running
	I1217 20:03:25.269127  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:25.269133  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:25.269138  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:25.269144  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:25.269151  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:25.269157  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:25.269162  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Running
	I1217 20:03:25.269173  661899 system_pods.go:126] duration metric: took 1.610390788s to wait for k8s-apps to be running ...
	I1217 20:03:25.269190  661899 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:03:25.269306  661899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:25.287846  661899 system_svc.go:56] duration metric: took 18.644374ms WaitForService to wait for kubelet
	I1217 20:03:25.287879  661899 kubeadm.go:587] duration metric: took 16.499179711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:03:25.287990  661899 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:03:25.291437  661899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:03:25.291478  661899 node_conditions.go:123] node cpu capacity is 8
	I1217 20:03:25.291507  661899 node_conditions.go:105] duration metric: took 3.509904ms to run NodePressure ...
	I1217 20:03:25.291524  661899 start.go:242] waiting for startup goroutines ...
	I1217 20:03:25.291537  661899 start.go:247] waiting for cluster config update ...
	I1217 20:03:25.291553  661899 start.go:256] writing updated cluster config ...
	I1217 20:03:25.291932  661899 ssh_runner.go:195] Run: rm -f paused
	I1217 20:03:25.297369  661899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:25.301961  661899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8jj68" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.307475  661899 pod_ready.go:94] pod "coredns-66bc5c9577-8jj68" is "Ready"
	I1217 20:03:25.308469  661899 pod_ready.go:86] duration metric: took 6.47552ms for pod "coredns-66bc5c9577-8jj68" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.311036  661899 pod_ready.go:83] waiting for pod "etcd-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.316557  661899 pod_ready.go:94] pod "etcd-kindnet-601560" is "Ready"
	I1217 20:03:25.316589  661899 pod_ready.go:86] duration metric: took 5.529956ms for pod "etcd-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.319049  661899 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.324345  661899 pod_ready.go:94] pod "kube-apiserver-kindnet-601560" is "Ready"
	I1217 20:03:25.324376  661899 pod_ready.go:86] duration metric: took 5.29554ms for pod "kube-apiserver-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.327429  661899 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.707165  661899 pod_ready.go:94] pod "kube-controller-manager-kindnet-601560" is "Ready"
	I1217 20:03:25.707196  661899 pod_ready.go:86] duration metric: took 379.735745ms for pod "kube-controller-manager-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.904301  661899 pod_ready.go:83] waiting for pod "kube-proxy-bskt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.302475  661899 pod_ready.go:94] pod "kube-proxy-bskt5" is "Ready"
	I1217 20:03:26.302513  661899 pod_ready.go:86] duration metric: took 398.177069ms for pod "kube-proxy-bskt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.503007  661899 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.902326  661899 pod_ready.go:94] pod "kube-scheduler-kindnet-601560" is "Ready"
	I1217 20:03:26.902360  661899 pod_ready.go:86] duration metric: took 399.326567ms for pod "kube-scheduler-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.902375  661899 pod_ready.go:40] duration metric: took 1.604953329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:26.947991  661899 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:03:26.950057  661899 out.go:179] * Done! kubectl is now configured to use "kindnet-601560" cluster and "default" namespace by default
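The final check for this profile reports kubectl 1.35.0 against a v1.34.3 control plane, a one-minor-version skew that kubectl supports. A quick way to reproduce that comparison for the freshly configured context (a sketch; the context name is taken from the "Done!" line above):

    # prints both client and server versions for the kindnet-601560 context
    kubectl --context kindnet-601560 version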
	W1217 20:03:25.460486  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:25.959552  663785 pod_ready.go:94] pod "coredns-66bc5c9577-wkvhv" is "Ready"
	I1217 20:03:25.959580  663785 pod_ready.go:86] duration metric: took 31.00625432s for pod "coredns-66bc5c9577-wkvhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.961836  663785 pod_ready.go:83] waiting for pod "etcd-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.966292  663785 pod_ready.go:94] pod "etcd-embed-certs-147021" is "Ready"
	I1217 20:03:25.966315  663785 pod_ready.go:86] duration metric: took 4.451271ms for pod "etcd-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.968353  663785 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.972371  663785 pod_ready.go:94] pod "kube-apiserver-embed-certs-147021" is "Ready"
	I1217 20:03:25.972392  663785 pod_ready.go:86] duration metric: took 4.014818ms for pod "kube-apiserver-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.974517  663785 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.160111  663785 pod_ready.go:94] pod "kube-controller-manager-embed-certs-147021" is "Ready"
	I1217 20:03:26.160146  663785 pod_ready.go:86] duration metric: took 185.603925ms for pod "kube-controller-manager-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.357699  663785 pod_ready.go:83] waiting for pod "kube-proxy-nwn9n" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.757915  663785 pod_ready.go:94] pod "kube-proxy-nwn9n" is "Ready"
	I1217 20:03:26.757951  663785 pod_ready.go:86] duration metric: took 400.221364ms for pod "kube-proxy-nwn9n" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.958004  663785 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:27.357953  663785 pod_ready.go:94] pod "kube-scheduler-embed-certs-147021" is "Ready"
	I1217 20:03:27.357989  663785 pod_ready.go:86] duration metric: took 399.952058ms for pod "kube-scheduler-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:27.358010  663785 pod_ready.go:40] duration metric: took 32.409077301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:27.412909  663785 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:03:27.415038  663785 out.go:179] * Done! kubectl is now configured to use "embed-certs-147021" cluster and "default" namespace by default
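For embed-certs-147021 the extra wait only completes once every pod carrying one of the listed labels is Ready. The same check can be reproduced by hand with the label selectors from the log (a sketch, assuming the admin kubeconfig minikube just wrote):

    kubectl --context embed-certs-147021 -n kube-system get pods \
      -l 'k8s-app in (kube-dns, kube-proxy)' -o wide
    kubectl --context embed-certs-147021 -n kube-system get pods \
      -l 'component in (etcd, kube-apiserver, kube-controller-manager, kube-scheduler)'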
	I1217 20:03:27.882278  670841 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002438831s
	I1217 20:03:27.899395  670841 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:03:27.910240  670841 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:03:27.918636  670841 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:03:27.918906  670841 kubeadm.go:319] [mark-control-plane] Marking the node calico-601560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:03:27.927196  670841 kubeadm.go:319] [bootstrap-token] Using token: 2aft5u.weq4vsf1xmievkcr
	I1217 20:03:27.928731  670841 out.go:252]   - Configuring RBAC rules ...
	I1217 20:03:27.928880  670841 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:03:27.933567  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:03:27.939106  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:03:27.941823  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:03:27.944845  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:03:27.948818  670841 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:03:28.288153  670841 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:03:28.708604  670841 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:03:29.289518  670841 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:03:29.290969  670841 kubeadm.go:319] 
	I1217 20:03:29.291067  670841 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:03:29.291094  670841 kubeadm.go:319] 
	I1217 20:03:29.291216  670841 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:03:29.291227  670841 kubeadm.go:319] 
	I1217 20:03:29.291262  670841 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:03:29.291355  670841 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:03:29.291412  670841 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:03:29.291417  670841 kubeadm.go:319] 
	I1217 20:03:29.291485  670841 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:03:29.291495  670841 kubeadm.go:319] 
	I1217 20:03:29.291655  670841 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:03:29.291680  670841 kubeadm.go:319] 
	I1217 20:03:29.291752  670841 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:03:29.291876  670841 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:03:29.291966  670841 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:03:29.291974  670841 kubeadm.go:319] 
	I1217 20:03:29.292141  670841 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:03:29.292255  670841 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:03:29.292265  670841 kubeadm.go:319] 
	I1217 20:03:29.292368  670841 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2aft5u.weq4vsf1xmievkcr \
	I1217 20:03:29.292485  670841 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:03:29.292528  670841 kubeadm.go:319] 	--control-plane 
	I1217 20:03:29.292547  670841 kubeadm.go:319] 
	I1217 20:03:29.292662  670841 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:03:29.292673  670841 kubeadm.go:319] 
	I1217 20:03:29.292781  670841 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2aft5u.weq4vsf1xmievkcr \
	I1217 20:03:29.292943  670841 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:03:29.295607  670841 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:03:29.295717  670841 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
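Both closing warnings are expected in this environment: the GCP kernel image does not ship a loadable "configs" module, so kubeadm's system verification cannot read the kernel config via modprobe, and minikube starts kubelet itself (visible further down in this log) rather than enabling the systemd unit. A sketch of checking the kernel options another way, assuming the usual Ubuntu/GCP layout with the build config installed under /boot:

    # fall back to the on-disk kernel config when the configs module is unavailable
    [ -r /proc/config.gz ] && zgrep -E 'CONFIG_CGROUPS|CONFIG_MEMCG' /proc/config.gz \
      || grep -E 'CONFIG_CGROUPS|CONFIG_MEMCG' "/boot/config-$(uname -r)"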
	I1217 20:03:29.295751  670841 cni.go:84] Creating CNI manager for "calico"
	I1217 20:03:29.298347  670841 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1217 20:03:29.299707  670841 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:03:29.299731  670841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1217 20:03:29.315678  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
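The ~330 KB manifest copied to /var/tmp/minikube/cni.yaml is what later produces the Pending calico-node and calico-kube-controllers pods in the system_pods listings below. Once the apply returns, the rollout can be followed directly (a sketch, assuming the manifest uses the conventional calico-node DaemonSet name, which the pod names below suggest):

    kubectl --context calico-601560 -n kube-system rollout status daemonset/calico-node --timeout=5m
    kubectl --context calico-601560 -n kube-system get pods -l k8s-app=calico-node -o wide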
	I1217 20:03:30.131447  670841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:03:30.131524  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:30.131571  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-601560 minikube.k8s.io/updated_at=2025_12_17T20_03_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=calico-601560 minikube.k8s.io/primary=true
	I1217 20:03:30.218315  670841 ops.go:34] apiserver oom_adj: -16
	I1217 20:03:30.218328  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:30.718470  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:31.219276  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:31.719273  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:32.218557  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:32.718913  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:33.219105  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:33.718420  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:34.218969  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:34.351057  670841 kubeadm.go:1114] duration metric: took 4.219598036s to wait for elevateKubeSystemPrivileges
	I1217 20:03:34.351188  670841 kubeadm.go:403] duration metric: took 14.868171074s to StartCluster
	I1217 20:03:34.351285  670841 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:34.351387  670841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:03:34.355527  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:34.356168  670841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:03:34.356791  670841 config.go:182] Loaded profile config "calico-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:34.356919  670841 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:03:34.356946  670841 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:03:34.357552  670841 addons.go:70] Setting storage-provisioner=true in profile "calico-601560"
	I1217 20:03:34.357589  670841 addons.go:239] Setting addon storage-provisioner=true in "calico-601560"
	I1217 20:03:34.357634  670841 addons.go:70] Setting default-storageclass=true in profile "calico-601560"
	I1217 20:03:34.357661  670841 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-601560"
	I1217 20:03:34.358038  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:34.357640  670841 host.go:66] Checking if "calico-601560" exists ...
	I1217 20:03:34.358631  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:34.360358  670841 out.go:179] * Verifying Kubernetes components...
	I1217 20:03:34.361516  670841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:34.391461  670841 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:03:34.392909  670841 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:34.392936  670841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:03:34.393004  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:34.398934  670841 addons.go:239] Setting addon default-storageclass=true in "calico-601560"
	I1217 20:03:34.398980  670841 host.go:66] Checking if "calico-601560" exists ...
	I1217 20:03:34.399587  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:34.442958  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:34.448994  670841 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:34.449027  670841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:03:34.449314  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:34.486757  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:34.586402  670841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:03:34.663309  670841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:03:34.668385  670841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:34.688682  670841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:34.941746  670841 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
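The sed pipeline a few lines above rewrites the coredns ConfigMap so the Corefile resolves host.minikube.internal to the gateway address and adds a log directive before errors. Based only on those sed expressions, the injected fragment and a way to inspect the result look roughly like this (a sketch):

    # injected Corefile fragment (reconstructed from the sed command above):
    #   hosts {
    #      192.168.94.1 host.minikube.internal
    #      fallthrough
    #   }
    # inspect the live ConfigMap after the replace
    kubectl --context calico-601560 -n kube-system get configmap coredns -o yaml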
	I1217 20:03:34.945927  670841 node_ready.go:35] waiting up to 15m0s for node "calico-601560" to be "Ready" ...
	I1217 20:03:35.250259  670841 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:03:35.251908  670841 addons.go:530] duration metric: took 894.919701ms for enable addons: enabled=[storage-provisioner default-storageclass]
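Only the two default addons end up enabled for this profile; every other entry in the toEnable map above is false. A sketch of confirming that from the host with the same profile name (the generic minikube binary name stands in for the test harness's out/minikube-linux-amd64 path):

    minikube -p calico-601560 addons list
    kubectl --context calico-601560 get storageclass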
	I1217 20:03:35.448730  670841 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-601560" context rescaled to 1 replicas
	W1217 20:03:36.949992  670841 node_ready.go:57] node "calico-601560" has "Ready":"False" status (will retry)
	I1217 20:03:38.949836  670841 node_ready.go:49] node "calico-601560" is "Ready"
	I1217 20:03:38.949864  670841 node_ready.go:38] duration metric: took 4.003891228s for node "calico-601560" to be "Ready" ...
	I1217 20:03:38.949883  670841 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:03:38.949932  670841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:03:38.963343  670841 api_server.go:72] duration metric: took 4.605966431s to wait for apiserver process to appear ...
	I1217 20:03:38.963370  670841 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:03:38.963392  670841 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 20:03:38.969266  670841 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 20:03:38.970382  670841 api_server.go:141] control plane version: v1.34.3
	I1217 20:03:38.970416  670841 api_server.go:131] duration metric: took 7.037247ms to wait for apiserver health ...
	I1217 20:03:38.970428  670841 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:03:38.975014  670841 system_pods.go:59] 9 kube-system pods found
	I1217 20:03:38.975060  670841 system_pods.go:61] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:38.975107  670841 system_pods.go:61] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:38.975121  670841 system_pods.go:61] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:38.975127  670841 system_pods.go:61] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:38.975163  670841 system_pods.go:61] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:38.975180  670841 system_pods.go:61] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:38.975218  670841 system_pods.go:61] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:38.975233  670841 system_pods.go:61] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:38.975240  670841 system_pods.go:61] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:38.975248  670841 system_pods.go:74] duration metric: took 4.813123ms to wait for pod list to return data ...
	I1217 20:03:38.975259  670841 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:03:38.979364  670841 default_sa.go:45] found service account: "default"
	I1217 20:03:38.979398  670841 default_sa.go:55] duration metric: took 4.131056ms for default service account to be created ...
	I1217 20:03:38.979413  670841 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:03:38.985950  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:38.986005  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:38.986018  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:38.986034  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:38.986040  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:38.986048  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:38.986063  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:38.986070  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:38.986106  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:38.986115  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:38.986153  670841 retry.go:31] will retry after 298.918033ms: missing components: kube-dns
	I1217 20:03:39.298711  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:39.298772  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:39.298787  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:39.298799  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:39.298807  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:39.298817  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:39.298827  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:39.298834  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:39.298842  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:39.298850  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:39.298883  670841 retry.go:31] will retry after 285.981632ms: missing components: kube-dns
	I1217 20:03:39.592693  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:39.592738  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:39.592751  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:39.592762  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:39.592769  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:39.592779  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:39.592787  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:39.592809  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:39.592822  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:39.592830  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:39.592854  670841 retry.go:31] will retry after 333.324304ms: missing components: kube-dns
	I1217 20:03:39.941982  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:39.942027  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:39.942040  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:39.942050  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:39.942057  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:39.942066  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:39.942088  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:39.942095  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:39.942103  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:39.942108  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Running
	I1217 20:03:39.942134  670841 retry.go:31] will retry after 411.354945ms: missing components: kube-dns
	I1217 20:03:40.359871  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:40.359917  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:40.359933  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:40.359942  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:40.359949  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:40.359959  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:40.359966  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:40.359980  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:40.359988  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:40.359993  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Running
	I1217 20:03:40.360014  670841 retry.go:31] will retry after 558.71846ms: missing components: kube-dns
	I1217 20:03:40.924257  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:40.924299  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:40.924312  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:40.924322  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:40.924329  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:40.924337  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:40.924345  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:40.924352  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:40.924358  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running
	I1217 20:03:40.924363  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Running
	I1217 20:03:40.924384  670841 retry.go:31] will retry after 622.493188ms: missing components: kube-dns
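At this point the wait loop is retrying only because coredns (the kube-dns component) is still Pending, which is expected until calico-node finishes setting up the pod network. A sketch of inspecting the stuck pod while the retries run, using the same label minikube waits on:

    kubectl --context calico-601560 -n kube-system get pods -l k8s-app=kube-dns -o wide
    kubectl --context calico-601560 -n kube-system describe pod -l k8s-app=kube-dns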
	
	
	==> CRI-O <==
	Dec 17 20:03:05 embed-certs-147021 crio[572]: time="2025-12-17T20:03:05.192982774Z" level=info msg="Created container 4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf/kubernetes-dashboard" id=310ca953-a492-4702-8998-16bbf9e3585d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:05 embed-certs-147021 crio[572]: time="2025-12-17T20:03:05.193944852Z" level=info msg="Starting container: 4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf" id=05fb3d16-6e5d-426c-a0ba-ad15cafc2222 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:03:05 embed-certs-147021 crio[572]: time="2025-12-17T20:03:05.196601784Z" level=info msg="Started container" PID=1744 containerID=4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf/kubernetes-dashboard id=05fb3d16-6e5d-426c-a0ba-ad15cafc2222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f8131503cb2aaf2cbfe502959515c05d035e1514b90b74859656ed4eb04939d
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.57679923Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=280fe226-71c4-49ba-ba79-20ca9125afdc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.579780824Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dc27c7fc-646b-46da-ada0-6206e546ea13 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.583086349Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper" id=85a02343-2ac2-4d64-8347-a303942e9209 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.583248964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.592265481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.592804809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.621787138Z" level=info msg="Created container 7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper" id=85a02343-2ac2-4d64-8347-a303942e9209 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.622549831Z" level=info msg="Starting container: 7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b" id=6a978afe-b437-4e93-b8ff-a2d2d3b0c1c8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.624942722Z" level=info msg="Started container" PID=1762 containerID=7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper id=6a978afe-b437-4e93-b8ff-a2d2d3b0c1c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4020d8e1f7665847acdc9a95cc59e1e385806fb4667cd2e1c55df7880f1d07d
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.742313119Z" level=info msg="Removing container: bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e" id=7453514c-8ad3-4f89-9248-eef0917c0ade name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.754319235Z" level=info msg="Removed container bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper" id=7453514c-8ad3-4f89-9248-eef0917c0ade name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.757193521Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aa5f7118-ccd0-4fa2-8c56-8d08ba5f840e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.758413451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eae74e2d-18c2-45a0-b44a-2d12fd6cb2da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.759538275Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=89531671-2b5a-4110-83d0-966d2e6d8658 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.759678247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.764542451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.764733392Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7d03fb8591ffad9224341ce3fc4b64ba94751e817f0559f188311253b196c956/merged/etc/passwd: no such file or directory"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.764763186Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7d03fb8591ffad9224341ce3fc4b64ba94751e817f0559f188311253b196c956/merged/etc/group: no such file or directory"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.765043016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.795263289Z" level=info msg="Created container a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e: kube-system/storage-provisioner/storage-provisioner" id=89531671-2b5a-4110-83d0-966d2e6d8658 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.795937565Z" level=info msg="Starting container: a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e" id=c3f3bc47-fe93-4e5e-a572-ae3066f18002 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.798285384Z" level=info msg="Started container" PID=1776 containerID=a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e description=kube-system/storage-provisioner/storage-provisioner id=c3f3bc47-fe93-4e5e-a572-ae3066f18002 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09d85284adef2a550abdb9cd1b80ec22c440fd0dd844ec4ce3a5fd6a78991530
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a97831dd0cfa9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   09d85284adef2       storage-provisioner                          kube-system
	7d20bd215cfe1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   a4020d8e1f766       dashboard-metrics-scraper-6ffb444bf9-84b8z   kubernetes-dashboard
	4ebfa66d3b28e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   38 seconds ago      Running             kubernetes-dashboard        0                   0f8131503cb2a       kubernetes-dashboard-855c9754f9-27rqf        kubernetes-dashboard
	a12f276e6990e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   b91d101279124       busybox                                      default
	42c9fb76fa617       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           49 seconds ago      Running             coredns                     0                   c0ec26f00e677       coredns-66bc5c9577-wkvhv                     kube-system
	2766a8fcb5ebd       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           49 seconds ago      Running             kube-proxy                  0                   98e5a82c47446       kube-proxy-nwn9n                             kube-system
	537a5407ce604       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           49 seconds ago      Running             kindnet-cni                 0                   e9975a064bca8       kindnet-qp6z8                                kube-system
	138ac303d832d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   09d85284adef2       storage-provisioner                          kube-system
	908edcd5f5289       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           52 seconds ago      Running             etcd                        0                   6eca325648229       etcd-embed-certs-147021                      kube-system
	9609c0cfa32a6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           52 seconds ago      Running             kube-apiserver              0                   22da4e7ab7c21       kube-apiserver-embed-certs-147021            kube-system
	65e71064f4502       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           52 seconds ago      Running             kube-controller-manager     0                   1dacda031bdb4       kube-controller-manager-embed-certs-147021   kube-system
	d703ea40f171a       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           52 seconds ago      Running             kube-scheduler              0                   2bd8523b3388c       kube-scheduler-embed-certs-147021            kube-system
	
	
	==> coredns [42c9fb76fa6175d615c9c78f7030f741afc7310992f335396b1970fe704fefae] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58937 - 31191 "HINFO IN 2680119756027112146.7582208013871341038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.424419354s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-147021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-147021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=embed-certs-147021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-147021
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:02:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-147021
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                c55125f4-5cb9-479d-a732-b6dc1626ae27
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-wkvhv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-embed-certs-147021                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-qp6z8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-147021             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-embed-certs-147021    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-nwn9n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-147021             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-84b8z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-27rqf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  113s (x8 over 113s)  kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s (x8 over 113s)  kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s (x8 over 113s)  kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    109s                 kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  109s                 kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     109s                 kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           104s                 node-controller  Node embed-certs-147021 event: Registered Node embed-certs-147021 in Controller
	  Normal  NodeReady                90s                  kubelet          Node embed-certs-147021 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  53s (x8 over 53s)    kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    53s (x8 over 53s)    kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     53s (x8 over 53s)    kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                  node-controller  Node embed-certs-147021 event: Registered Node embed-certs-147021 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb] <==
	{"level":"warn","ts":"2025-12-17T20:02:52.446622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.455420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.463791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.472174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.480345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.489515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.497207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.504747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.512635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.519009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.525477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.533099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.541256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.548024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.555218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.563696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.571559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.579245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.599349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.607744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.616224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34354","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:03:10.257578Z","caller":"traceutil/trace.go:172","msg":"trace[1477758015] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:678; }","duration":"172.845086ms","start":"2025-12-17T20:03:10.084709Z","end":"2025-12-17T20:03:10.257554Z","steps":["trace[1477758015] 'read index received'  (duration: 172.836184ms)","trace[1477758015] 'applied index is now lower than readState.Index'  (duration: 7.662µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:03:10.285189Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.413374ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:03:10.285276Z","caller":"traceutil/trace.go:172","msg":"trace[1788756996] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:646; }","duration":"200.546213ms","start":"2025-12-17T20:03:10.084703Z","end":"2025-12-17T20:03:10.285249Z","steps":["trace[1788756996] 'agreement among raft nodes before linearized reading'  (duration: 172.943275ms)","trace[1788756996] 'range keys from in-memory index tree'  (duration: 27.446754ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:03:10.285322Z","caller":"traceutil/trace.go:172","msg":"trace[384915860] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"203.549005ms","start":"2025-12-17T20:03:10.081762Z","end":"2025-12-17T20:03:10.285311Z","steps":["trace[384915860] 'process raft request'  (duration: 175.87534ms)","trace[384915860] 'compare'  (duration: 27.544232ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:03:43 up  1:46,  0 user,  load average: 4.99, 3.98, 2.73
	Linux embed-certs-147021 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [537a5407ce604a89aeaa3dfb925609467a6bd3eeb7abd61d4ca526f32aafd92b] <==
	I1217 20:02:54.192454       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:54.192793       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 20:02:54.193039       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:54.193069       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:54.193114       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:54.496225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:54.496840       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:54.496901       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:54.508283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:54.890516       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:54.890563       1 metrics.go:72] Registering metrics
	I1217 20:02:54.890636       1 controller.go:711] "Syncing nftables rules"
	I1217 20:03:04.494971       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:04.495055       1 main.go:301] handling current node
	I1217 20:03:14.495284       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:14.495333       1 main.go:301] handling current node
	I1217 20:03:24.494995       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:24.495030       1 main.go:301] handling current node
	I1217 20:03:34.495006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:34.495050       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40] <==
	I1217 20:02:53.293656       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:02:53.293651       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:02:53.293549       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:02:53.295215       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:02:53.295257       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:02:53.295283       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:02:53.295306       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:02:53.309069       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:02:53.316135       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1217 20:02:53.317605       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:02:53.344373       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:02:53.354797       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:02:53.551002       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:53.551154       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:53.671723       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:02:53.717025       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:02:53.739397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:02:53.750299       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:02:53.802829       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.75.181"}
	I1217 20:02:53.814778       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.108.200"}
	I1217 20:02:54.197831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:02:56.873944       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:56.874012       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:57.023283       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:02:57.123254       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d] <==
	I1217 20:02:56.619357       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:02:56.619391       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:02:56.619416       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:02:56.619419       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:02:56.619781       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:02:56.619823       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:02:56.619847       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:02:56.619911       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 20:02:56.620037       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 20:02:56.620224       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 20:02:56.620328       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-147021"
	I1217 20:02:56.620361       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 20:02:56.621069       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:02:56.621124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:02:56.621571       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:02:56.622827       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:02:56.626004       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:02:56.626149       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:02:56.626210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:02:56.628262       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:02:56.628413       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:02:56.628463       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:02:56.628472       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:02:56.628480       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:02:56.648607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2766a8fcb5ebd7aeee551794853fcba5d9153eca108dbbefaecfd962e38c5f3d] <==
	I1217 20:02:53.979196       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:54.051955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:02:54.153110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:02:54.153262       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 20:02:54.153392       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:54.173375       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:54.173479       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:02:54.178983       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:54.179404       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:02:54.179438       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:54.181027       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:54.181053       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:54.181124       1 config.go:200] "Starting service config controller"
	I1217 20:02:54.181135       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:54.181125       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:54.181154       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:54.181455       1 config.go:309] "Starting node config controller"
	I1217 20:02:54.181511       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:54.181538       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:54.281214       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:54.281251       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:02:54.281565       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231] <==
	I1217 20:02:53.282345       1 serving.go:386] Generated self-signed cert in-memory
	I1217 20:02:54.124386       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:02:54.124413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:54.129539       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 20:02:54.129592       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 20:02:54.129592       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:54.129627       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:54.129653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:54.129714       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:54.129975       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:02:54.130040       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:02:54.230604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:54.230840       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:54.230893       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 17 20:02:57 embed-certs-147021 kubelet[724]: I1217 20:02:57.173441     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4g4g\" (UniqueName: \"kubernetes.io/projected/5673bd8c-db08-4434-8ac7-ea0584623f5b-kube-api-access-v4g4g\") pod \"dashboard-metrics-scraper-6ffb444bf9-84b8z\" (UID: \"5673bd8c-db08-4434-8ac7-ea0584623f5b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z"
	Dec 17 20:02:57 embed-certs-147021 kubelet[724]: I1217 20:02:57.173527     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0513181f-349f-406d-bee0-2833c0e27ccb-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-27rqf\" (UID: \"0513181f-349f-406d-bee0-2833c0e27ccb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf"
	Dec 17 20:02:57 embed-certs-147021 kubelet[724]: I1217 20:02:57.173628     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wg6d\" (UniqueName: \"kubernetes.io/projected/0513181f-349f-406d-bee0-2833c0e27ccb-kube-api-access-2wg6d\") pod \"kubernetes-dashboard-855c9754f9-27rqf\" (UID: \"0513181f-349f-406d-bee0-2833c0e27ccb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf"
	Dec 17 20:03:01 embed-certs-147021 kubelet[724]: I1217 20:03:01.687098     724 scope.go:117] "RemoveContainer" containerID="b29d05f5c1822dd2ad509ea814df77b389ed7bbc8af135fe6980bed69d679cb7"
	Dec 17 20:03:02 embed-certs-147021 kubelet[724]: I1217 20:03:02.691347     724 scope.go:117] "RemoveContainer" containerID="b29d05f5c1822dd2ad509ea814df77b389ed7bbc8af135fe6980bed69d679cb7"
	Dec 17 20:03:02 embed-certs-147021 kubelet[724]: I1217 20:03:02.691700     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:02 embed-certs-147021 kubelet[724]: E1217 20:03:02.692045     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:03 embed-certs-147021 kubelet[724]: I1217 20:03:03.698423     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:03 embed-certs-147021 kubelet[724]: E1217 20:03:03.698631     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:05 embed-certs-147021 kubelet[724]: I1217 20:03:05.721191     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf" podStartSLOduration=0.998599373 podStartE2EDuration="8.721165239s" podCreationTimestamp="2025-12-17 20:02:57 +0000 UTC" firstStartedPulling="2025-12-17 20:02:57.41837367 +0000 UTC m=+6.983545829" lastFinishedPulling="2025-12-17 20:03:05.140939534 +0000 UTC m=+14.706111695" observedRunningTime="2025-12-17 20:03:05.720752329 +0000 UTC m=+15.285924495" watchObservedRunningTime="2025-12-17 20:03:05.721165239 +0000 UTC m=+15.286337406"
	Dec 17 20:03:06 embed-certs-147021 kubelet[724]: I1217 20:03:06.253002     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:06 embed-certs-147021 kubelet[724]: E1217 20:03:06.253278     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: I1217 20:03:19.576176     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: I1217 20:03:19.740934     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: I1217 20:03:19.741222     724 scope.go:117] "RemoveContainer" containerID="7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: E1217 20:03:19.741484     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:24 embed-certs-147021 kubelet[724]: I1217 20:03:24.756658     724 scope.go:117] "RemoveContainer" containerID="138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350"
	Dec 17 20:03:26 embed-certs-147021 kubelet[724]: I1217 20:03:26.253843     724 scope.go:117] "RemoveContainer" containerID="7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	Dec 17 20:03:26 embed-certs-147021 kubelet[724]: E1217 20:03:26.254025     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:37 embed-certs-147021 kubelet[724]: I1217 20:03:37.576315     724 scope.go:117] "RemoveContainer" containerID="7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	Dec 17 20:03:37 embed-certs-147021 kubelet[724]: E1217 20:03:37.576525     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: kubelet.service: Consumed 1.735s CPU time.
	
	
	==> kubernetes-dashboard [4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf] <==
	2025/12/17 20:03:05 Starting overwatch
	2025/12/17 20:03:05 Using namespace: kubernetes-dashboard
	2025/12/17 20:03:05 Using in-cluster config to connect to apiserver
	2025/12/17 20:03:05 Using secret token for csrf signing
	2025/12/17 20:03:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:03:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:03:05 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 20:03:05 Generating JWE encryption key
	2025/12/17 20:03:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:03:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:03:05 Initializing JWE encryption key from synchronized object
	2025/12/17 20:03:05 Creating in-cluster Sidecar client
	2025/12/17 20:03:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:03:05 Serving insecurely on HTTP port: 9090
	2025/12/17 20:03:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350] <==
	I1217 20:02:53.934706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:03:23.938561       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e] <==
	I1217 20:03:24.813213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:03:24.823054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:03:24.823137       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:03:24.825815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:28.281515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:32.542579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:36.142052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:39.197700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:42.221298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:42.227792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:03:42.228045       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:03:42.228301       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-147021_a7504213-e276-4f45-9ca2-4efa8775deb0!
	I1217 20:03:42.228552       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"910d36f2-445e-4325-a1df-6c5c1d1eea0a", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-147021_a7504213-e276-4f45-9ca2-4efa8775deb0 became leader
	W1217 20:03:42.231297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:42.240763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:03:42.329306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-147021_a7504213-e276-4f45-9ca2-4efa8775deb0!
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-147021 -n embed-certs-147021
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-147021 -n embed-certs-147021: exit status 2 (392.397813ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-147021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect embed-certs-147021
helpers_test.go:244: (dbg) docker inspect embed-certs-147021:

-- stdout --
	[
	    {
	        "Id": "83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8",
	        "Created": "2025-12-17T20:01:40.099829209Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 664321,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T20:02:43.901732416Z",
	            "FinishedAt": "2025-12-17T20:02:41.982920333Z"
	        },
	        "Image": "sha256:e3abeb065413b7566dd42e98e204ab3ad174790743f1f5cd427036c11b49d7f1",
	        "ResolvConfPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/hostname",
	        "HostsPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/hosts",
	        "LogPath": "/var/lib/docker/containers/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8/83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8-json.log",
	        "Name": "/embed-certs-147021",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-147021:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-147021",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83dda83adbe19d01d49a5760f6d4c64b7758728b6bba04deace62e55f005deb8",
	                "LowerDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11-init/diff:/var/lib/docker/overlay2/29727d664a8119dcd8d22d923cfdfa7d86f99088879bf2a113d907b51116eb38/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a2bd0701b2a8182e8c812ff61b8a44b36e1fa0dbd92285a2851592ab9f71eb11/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-147021",
	                "Source": "/var/lib/docker/volumes/embed-certs-147021/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-147021",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-147021",
	                "name.minikube.sigs.k8s.io": "embed-certs-147021",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cc26056bc6f76de2c3b659736415471e992f388dd5a85151decc80a15cb978ce",
	            "SandboxKey": "/var/run/docker/netns/cc26056bc6f7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-147021": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d0eb8a5e286382abd016e9750b18658c10571b76b24cafa91dc20ab0a3e49d6a",
	                    "EndpointID": "fb72742245b5dc815cd26486471925d63a94e962c5b56408fbc6074e8f348698",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f2:6f:c3:63:be:c2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-147021",
	                        "83dda83adbe1"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021: exit status 2 (422.317509ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-147021 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-147021 logs -n 25: (1.255248679s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────┬────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                     ARGS                                     │      PROFILE       │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────┼────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-601560 sudo systemctl cat kubelet --no-pager                         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo journalctl -xeu kubelet --all --full --no-pager          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/kubernetes/kubelet.conf                         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /var/lib/kubelet/config.yaml                         │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status docker --all --full --no-pager          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo systemctl cat docker --no-pager                          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/docker/daemon.json                              │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo docker system info                                       │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ image   │ embed-certs-147021 image list --format=json                                  │ embed-certs-147021 │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status cri-docker --all --full --no-pager      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ pause   │ -p embed-certs-147021 --alsologtostderr -v=1                                 │ embed-certs-147021 │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo systemctl cat cri-docker --no-pager                      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo cat /usr/lib/systemd/system/cri-docker.service           │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cri-dockerd --version                                    │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status containerd --all --full --no-pager      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	│ ssh     │ -p auto-601560 sudo systemctl cat containerd --no-pager                      │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /lib/systemd/system/containerd.service               │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo cat /etc/containerd/config.toml                          │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo containerd config dump                                   │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl status crio --all --full --no-pager            │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo systemctl cat crio --no-pager                            │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;  │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ ssh     │ -p auto-601560 sudo crio config                                              │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │ 17 Dec 25 20:03 UTC │
	│ delete  │ -p auto-601560                                                               │ auto-601560        │ jenkins │ v1.37.0 │ 17 Dec 25 20:03 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────┴────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 20:03:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 20:03:06.319588  670841 out.go:360] Setting OutFile to fd 1 ...
	I1217 20:03:06.319905  670841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:03:06.319917  670841 out.go:374] Setting ErrFile to fd 2...
	I1217 20:03:06.319922  670841 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 20:03:06.320211  670841 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 20:03:06.320788  670841 out.go:368] Setting JSON to false
	I1217 20:03:06.322201  670841 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6337,"bootTime":1765995449,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 20:03:06.322262  670841 start.go:143] virtualization: kvm guest
	I1217 20:03:06.324189  670841 out.go:179] * [calico-601560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 20:03:06.325762  670841 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 20:03:06.325831  670841 notify.go:221] Checking for updates...
	I1217 20:03:06.328831  670841 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 20:03:06.330214  670841 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:03:06.331788  670841 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 20:03:06.333463  670841 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 20:03:06.334623  670841 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 20:03:06.336565  670841 config.go:182] Loaded profile config "auto-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:06.336723  670841 config.go:182] Loaded profile config "embed-certs-147021": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:06.336847  670841 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:06.336979  670841 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 20:03:06.364886  670841 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 20:03:06.365000  670841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:03:06.424631  670841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:03:06.413727225 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:03:06.424783  670841 docker.go:319] overlay module found
	I1217 20:03:06.427426  670841 out.go:179] * Using the docker driver based on user configuration
	I1217 20:03:06.428689  670841 start.go:309] selected driver: docker
	I1217 20:03:06.428712  670841 start.go:927] validating driver "docker" against <nil>
	I1217 20:03:06.428728  670841 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 20:03:06.429505  670841 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 20:03:06.493095  670841 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:76 SystemTime:2025-12-17 20:03:06.482003005 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 20:03:06.493274  670841 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 20:03:06.493508  670841 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:03:06.495120  670841 out.go:179] * Using Docker driver with root privileges
	I1217 20:03:06.496172  670841 cni.go:84] Creating CNI manager for "calico"
	I1217 20:03:06.496193  670841 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1217 20:03:06.496285  670841 start.go:353] cluster config:
	{Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:03:06.497678  670841 out.go:179] * Starting "calico-601560" primary control-plane node in "calico-601560" cluster
	I1217 20:03:06.498794  670841 cache.go:134] Beginning downloading kic base image for docker with crio
	I1217 20:03:06.499971  670841 out.go:179] * Pulling base image v0.0.48-1765966054-22186 ...
	I1217 20:03:06.501023  670841 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:03:06.501056  670841 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
	I1217 20:03:06.501066  670841 cache.go:65] Caching tarball of preloaded images
	I1217 20:03:06.501120  670841 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon
	I1217 20:03:06.501214  670841 preload.go:238] Found /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1217 20:03:06.501231  670841 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on crio
	I1217 20:03:06.501327  670841 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/config.json ...
	I1217 20:03:06.501352  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/config.json: {Name:mk6be0b9c208b74fea01fd07612f22127d8f64c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:06.522048  670841 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 in local docker daemon, skipping pull
	I1217 20:03:06.522070  670841 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 exists in daemon, skipping load
	I1217 20:03:06.522115  670841 cache.go:243] Successfully downloaded all kic artifacts
	I1217 20:03:06.522175  670841 start.go:360] acquireMachinesLock for calico-601560: {Name:mke3872ce2d2a14c829289822bac63089cff205d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 20:03:06.522289  670841 start.go:364] duration metric: took 90.527µs to acquireMachinesLock for "calico-601560"
	I1217 20:03:06.522323  670841 start.go:93] Provisioning new machine with config: &{Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:03:06.522402  670841 start.go:125] createHost starting for "" (driver="docker")
	I1217 20:03:04.651635  661899 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1217 20:03:04.656398  661899 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:03:04.656420  661899 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2620 bytes)
	I1217 20:03:04.680303  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:03:05.107294  661899 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:03:05.107396  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:05.107429  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-601560 minikube.k8s.io/updated_at=2025_12_17T20_03_05_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=kindnet-601560 minikube.k8s.io/primary=true
	I1217 20:03:05.118956  661899 ops.go:34] apiserver oom_adj: -16
	I1217 20:03:05.196882  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:05.697507  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:06.197676  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:06.697717  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:07.197033  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:07.697552  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:08.197878  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1217 20:03:05.959230  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:07.959482  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:08.697649  661899 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:08.785919  661899 kubeadm.go:1114] duration metric: took 3.678598199s to wait for elevateKubeSystemPrivileges
	I1217 20:03:08.785983  661899 kubeadm.go:403] duration metric: took 17.944375799s to StartCluster
	I1217 20:03:08.786011  661899 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:08.786130  661899 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:03:08.788373  661899 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:08.788664  661899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:03:08.788666  661899 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:03:08.788753  661899 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:03:08.788864  661899 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:08.788869  661899 addons.go:70] Setting storage-provisioner=true in profile "kindnet-601560"
	I1217 20:03:08.788893  661899 addons.go:239] Setting addon storage-provisioner=true in "kindnet-601560"
	I1217 20:03:08.788894  661899 addons.go:70] Setting default-storageclass=true in profile "kindnet-601560"
	I1217 20:03:08.788928  661899 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "kindnet-601560"
	I1217 20:03:08.788932  661899 host.go:66] Checking if "kindnet-601560" exists ...
	I1217 20:03:08.789357  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:03:08.789569  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:03:08.790648  661899 out.go:179] * Verifying Kubernetes components...
	I1217 20:03:08.795675  661899 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:08.821268  661899 addons.go:239] Setting addon default-storageclass=true in "kindnet-601560"
	I1217 20:03:08.821323  661899 host.go:66] Checking if "kindnet-601560" exists ...
	I1217 20:03:08.821828  661899 cli_runner.go:164] Run: docker container inspect kindnet-601560 --format={{.State.Status}}
	I1217 20:03:08.824049  661899 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1217 20:03:04.628802  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	W1217 20:03:06.629486  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	I1217 20:03:08.825880  661899 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:08.825913  661899 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:03:08.825975  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:03:08.859514  661899 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:08.859542  661899 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:03:08.859610  661899 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kindnet-601560
	I1217 20:03:08.862345  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:03:08.904999  661899 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/kindnet-601560/id_rsa Username:docker}
	I1217 20:03:08.936633  661899 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:03:08.984519  661899 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:03:09.004738  661899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:09.042429  661899 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:09.128183  661899 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1217 20:03:09.129752  661899 node_ready.go:35] waiting up to 15m0s for node "kindnet-601560" to be "Ready" ...
	I1217 20:03:09.375619  661899 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:03:06.524213  670841 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 20:03:06.524455  670841 start.go:159] libmachine.API.Create for "calico-601560" (driver="docker")
	I1217 20:03:06.524493  670841 client.go:173] LocalClient.Create starting
	I1217 20:03:06.524585  670841 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem
	I1217 20:03:06.524627  670841 main.go:143] libmachine: Decoding PEM data...
	I1217 20:03:06.524644  670841 main.go:143] libmachine: Parsing certificate...
	I1217 20:03:06.524695  670841 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem
	I1217 20:03:06.524713  670841 main.go:143] libmachine: Decoding PEM data...
	I1217 20:03:06.524723  670841 main.go:143] libmachine: Parsing certificate...
	I1217 20:03:06.525190  670841 cli_runner.go:164] Run: docker network inspect calico-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 20:03:06.543887  670841 cli_runner.go:211] docker network inspect calico-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 20:03:06.543968  670841 network_create.go:284] running [docker network inspect calico-601560] to gather additional debugging logs...
	I1217 20:03:06.543987  670841 cli_runner.go:164] Run: docker network inspect calico-601560
	W1217 20:03:06.562251  670841 cli_runner.go:211] docker network inspect calico-601560 returned with exit code 1
	I1217 20:03:06.562285  670841 network_create.go:287] error running [docker network inspect calico-601560]: docker network inspect calico-601560: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-601560 not found
	I1217 20:03:06.562301  670841 network_create.go:289] output of [docker network inspect calico-601560]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-601560 not found
	
	** /stderr **
	I1217 20:03:06.562456  670841 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:03:06.582566  670841 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
	I1217 20:03:06.583606  670841 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-67abe6566c60 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:82:43:08:7c:e3} reservation:<nil>}
	I1217 20:03:06.584154  670841 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f76d03f2ebfd IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:bb:9b:fb:af:46} reservation:<nil>}
	I1217 20:03:06.584791  670841 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e9e9e3776c58 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:72:28:1b:8b:8b:04} reservation:<nil>}
	I1217 20:03:06.585350  670841 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-d0eb8a5e2863 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:4a:9f:ed:c7:db:49} reservation:<nil>}
	I1217 20:03:06.586190  670841 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eb0850}
	I1217 20:03:06.586214  670841 network_create.go:124] attempt to create docker network calico-601560 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 20:03:06.586281  670841 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-601560 calico-601560
	I1217 20:03:06.640521  670841 network_create.go:108] docker network calico-601560 192.168.94.0/24 created
	I1217 20:03:06.640553  670841 kic.go:121] calculated static IP "192.168.94.2" for the "calico-601560" container
	I1217 20:03:06.640711  670841 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 20:03:06.659789  670841 cli_runner.go:164] Run: docker volume create calico-601560 --label name.minikube.sigs.k8s.io=calico-601560 --label created_by.minikube.sigs.k8s.io=true
	I1217 20:03:06.679536  670841 oci.go:103] Successfully created a docker volume calico-601560
	I1217 20:03:06.679611  670841 cli_runner.go:164] Run: docker run --rm --name calico-601560-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-601560 --entrypoint /usr/bin/test -v calico-601560:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -d /var/lib
	I1217 20:03:07.115152  670841 oci.go:107] Successfully prepared a docker volume calico-601560
	I1217 20:03:07.115226  670841 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:03:07.115238  670841 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 20:03:07.115294  670841 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-601560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 20:03:09.376928  661899 addons.go:530] duration metric: took 588.17152ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:03:09.720426  661899 kapi.go:214] "coredns" deployment in "kube-system" namespace and "kindnet-601560" context rescaled to 1 replicas
	W1217 20:03:11.133740  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:09.959713  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:11.960862  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:09.129477  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	W1217 20:03:11.628957  660659 node_ready.go:57] node "auto-601560" has "Ready":"False" status (will retry)
	I1217 20:03:12.628696  660659 node_ready.go:49] node "auto-601560" is "Ready"
	I1217 20:03:12.628736  660659 node_ready.go:38] duration metric: took 12.503409226s for node "auto-601560" to be "Ready" ...
	I1217 20:03:12.628766  660659 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:03:12.628833  660659 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:03:12.643155  660659 api_server.go:72] duration metric: took 13.096462901s to wait for apiserver process to appear ...
	I1217 20:03:12.643198  660659 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:03:12.643223  660659 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1217 20:03:12.647637  660659 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1217 20:03:12.648699  660659 api_server.go:141] control plane version: v1.34.3
	I1217 20:03:12.648728  660659 api_server.go:131] duration metric: took 5.52285ms to wait for apiserver health ...
	I1217 20:03:12.648738  660659 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:03:12.653240  660659 system_pods.go:59] 8 kube-system pods found
	I1217 20:03:12.653279  660659 system_pods.go:61] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:12.653289  660659 system_pods.go:61] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:12.653298  660659 system_pods.go:61] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:12.653303  660659 system_pods.go:61] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:12.653309  660659 system_pods.go:61] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:12.653317  660659 system_pods.go:61] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:12.653322  660659 system_pods.go:61] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:12.653330  660659 system_pods.go:61] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:12.653341  660659 system_pods.go:74] duration metric: took 4.595246ms to wait for pod list to return data ...
	I1217 20:03:12.653352  660659 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:03:12.655887  660659 default_sa.go:45] found service account: "default"
	I1217 20:03:12.655911  660659 default_sa.go:55] duration metric: took 2.54856ms for default service account to be created ...
	I1217 20:03:12.655923  660659 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:03:12.659065  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:12.659137  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:12.659146  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:12.659155  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:12.659160  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:12.659175  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:12.659183  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:12.659187  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:12.659195  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:12.659226  660659 retry.go:31] will retry after 262.559806ms: missing components: kube-dns
	I1217 20:03:12.926542  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:12.926578  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:12.926585  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:12.926592  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:12.926596  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:12.926600  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:12.926604  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:12.926607  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:12.926618  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:12.926647  660659 retry.go:31] will retry after 389.337916ms: missing components: kube-dns
	I1217 20:03:13.319939  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:13.319972  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:13.319979  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:13.319986  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:13.319991  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:13.319995  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:13.319998  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:13.320001  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:13.320006  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:13.320025  660659 retry.go:31] will retry after 343.958159ms: missing components: kube-dns
	I1217 20:03:13.668620  660659 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:13.668649  660659 system_pods.go:89] "coredns-66bc5c9577-29z8k" [33275a41-d1d0-4ff0-b13e-0a61665252d0] Running
	I1217 20:03:13.668655  660659 system_pods.go:89] "etcd-auto-601560" [84ea5660-1c17-4d7d-ace7-4efc651b3b68] Running
	I1217 20:03:13.668659  660659 system_pods.go:89] "kindnet-pzcj6" [213fbf0a-12ab-4696-b808-4ca39c6913e6] Running
	I1217 20:03:13.668662  660659 system_pods.go:89] "kube-apiserver-auto-601560" [e539032b-c213-4d66-bc82-a86e8dd6e1cc] Running
	I1217 20:03:13.668668  660659 system_pods.go:89] "kube-controller-manager-auto-601560" [b977df20-d3c0-4ab0-9bc9-7b383e9eea6e] Running
	I1217 20:03:13.668672  660659 system_pods.go:89] "kube-proxy-6tvf2" [83944b8f-6c28-4789-a976-d78ac2a920e4] Running
	I1217 20:03:13.668675  660659 system_pods.go:89] "kube-scheduler-auto-601560" [cc30b861-c249-4802-a69c-0b5ecc90e94f] Running
	I1217 20:03:13.668678  660659 system_pods.go:89] "storage-provisioner" [f8c900c6-d970-49fa-9093-805bafea97d0] Running
	I1217 20:03:13.668686  660659 system_pods.go:126] duration metric: took 1.012755801s to wait for k8s-apps to be running ...
	I1217 20:03:13.668696  660659 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:03:13.668742  660659 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:13.681722  660659 system_svc.go:56] duration metric: took 13.011537ms WaitForService to wait for kubelet
	I1217 20:03:13.681757  660659 kubeadm.go:587] duration metric: took 14.135072375s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:03:13.681783  660659 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:03:13.685089  660659 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:03:13.685119  660659 node_conditions.go:123] node cpu capacity is 8
	I1217 20:03:13.685139  660659 node_conditions.go:105] duration metric: took 3.349536ms to run NodePressure ...
	I1217 20:03:13.685155  660659 start.go:242] waiting for startup goroutines ...
	I1217 20:03:13.685164  660659 start.go:247] waiting for cluster config update ...
	I1217 20:03:13.685188  660659 start.go:256] writing updated cluster config ...
	I1217 20:03:13.685486  660659 ssh_runner.go:195] Run: rm -f paused
	I1217 20:03:13.689795  660659 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:13.693579  660659 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-29z8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.698310  660659 pod_ready.go:94] pod "coredns-66bc5c9577-29z8k" is "Ready"
	I1217 20:03:13.698341  660659 pod_ready.go:86] duration metric: took 4.737405ms for pod "coredns-66bc5c9577-29z8k" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.700751  660659 pod_ready.go:83] waiting for pod "etcd-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.705761  660659 pod_ready.go:94] pod "etcd-auto-601560" is "Ready"
	I1217 20:03:13.705787  660659 pod_ready.go:86] duration metric: took 5.009682ms for pod "etcd-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.707955  660659 pod_ready.go:83] waiting for pod "kube-apiserver-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.711879  660659 pod_ready.go:94] pod "kube-apiserver-auto-601560" is "Ready"
	I1217 20:03:13.711898  660659 pod_ready.go:86] duration metric: took 3.91844ms for pod "kube-apiserver-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:13.714007  660659 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.094423  660659 pod_ready.go:94] pod "kube-controller-manager-auto-601560" is "Ready"
	I1217 20:03:14.094453  660659 pod_ready.go:86] duration metric: took 380.4236ms for pod "kube-controller-manager-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.294921  660659 pod_ready.go:83] waiting for pod "kube-proxy-6tvf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.694779  660659 pod_ready.go:94] pod "kube-proxy-6tvf2" is "Ready"
	I1217 20:03:14.694808  660659 pod_ready.go:86] duration metric: took 399.855347ms for pod "kube-proxy-6tvf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:14.894375  660659 pod_ready.go:83] waiting for pod "kube-scheduler-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:15.293849  660659 pod_ready.go:94] pod "kube-scheduler-auto-601560" is "Ready"
	I1217 20:03:15.293878  660659 pod_ready.go:86] duration metric: took 399.470896ms for pod "kube-scheduler-auto-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:15.293890  660659 pod_ready.go:40] duration metric: took 1.604061758s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:15.342087  660659 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:03:15.344166  660659 out.go:179] * Done! kubectl is now configured to use "auto-601560" cluster and "default" namespace by default
	I1217 20:03:11.349010  670841 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-601560:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.233672828s)
	I1217 20:03:11.349045  670841 kic.go:203] duration metric: took 4.233803032s to extract preloaded images to volume ...
	W1217 20:03:11.349172  670841 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1217 20:03:11.349206  670841 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1217 20:03:11.349268  670841 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 20:03:11.416488  670841 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-601560 --name calico-601560 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-601560 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-601560 --network calico-601560 --ip 192.168.94.2 --volume calico-601560:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0
	I1217 20:03:11.743903  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Running}}
	I1217 20:03:11.765532  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:11.788594  670841 cli_runner.go:164] Run: docker exec calico-601560 stat /var/lib/dpkg/alternatives/iptables
	I1217 20:03:11.844141  670841 oci.go:144] the created container "calico-601560" has a running status.
	I1217 20:03:11.844177  670841 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa...
	I1217 20:03:11.921813  670841 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 20:03:11.952850  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:11.977649  670841 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 20:03:11.977679  670841 kic_runner.go:114] Args: [docker exec --privileged calico-601560 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 20:03:12.043236  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:12.068255  670841 machine.go:94] provisionDockerMachine start ...
	I1217 20:03:12.068380  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:12.099514  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:12.099950  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:12.099981  670841 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 20:03:12.100849  670841 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35966->127.0.0.1:33493: read: connection reset by peer
	I1217 20:03:15.249372  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-601560
	
	I1217 20:03:15.249406  670841 ubuntu.go:182] provisioning hostname "calico-601560"
	I1217 20:03:15.249471  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.268393  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:15.268658  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:15.268673  670841 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-601560 && echo "calico-601560" | sudo tee /etc/hostname
	I1217 20:03:15.431953  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-601560
	
	I1217 20:03:15.432028  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.453345  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:15.453599  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:15.453623  670841 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-601560' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-601560/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-601560' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 20:03:15.606909  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 20:03:15.606954  670841 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/22186-372245/.minikube CaCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22186-372245/.minikube}
	I1217 20:03:15.606981  670841 ubuntu.go:190] setting up certificates
	I1217 20:03:15.607003  670841 provision.go:84] configureAuth start
	I1217 20:03:15.607070  670841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-601560
	I1217 20:03:15.625605  670841 provision.go:143] copyHostCerts
	I1217 20:03:15.625682  670841 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem, removing ...
	I1217 20:03:15.625697  670841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem
	I1217 20:03:15.625771  670841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/ca.pem (1082 bytes)
	I1217 20:03:15.625883  670841 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem, removing ...
	I1217 20:03:15.625892  670841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem
	I1217 20:03:15.625921  670841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/cert.pem (1123 bytes)
	I1217 20:03:15.626010  670841 exec_runner.go:144] found /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem, removing ...
	I1217 20:03:15.626018  670841 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem
	I1217 20:03:15.626044  670841 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22186-372245/.minikube/key.pem (1675 bytes)
	I1217 20:03:15.626144  670841 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem org=jenkins.calico-601560 san=[127.0.0.1 192.168.94.2 calico-601560 localhost minikube]
	I1217 20:03:15.762350  670841 provision.go:177] copyRemoteCerts
	I1217 20:03:15.762417  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 20:03:15.762468  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.782377  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:15.892508  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1217 20:03:15.914325  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1217 20:03:15.932134  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 20:03:15.950593  670841 provision.go:87] duration metric: took 343.569515ms to configureAuth
	I1217 20:03:15.950626  670841 ubuntu.go:206] setting minikube options for container-runtime
	I1217 20:03:15.950844  670841 config.go:182] Loaded profile config "calico-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:15.951019  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:15.970726  670841 main.go:143] libmachine: Using SSH client type: native
	I1217 20:03:15.970969  670841 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84e1a0] 0x850e40 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1217 20:03:15.970989  670841 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1217 20:03:16.273601  670841 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1217 20:03:16.273650  670841 machine.go:97] duration metric: took 4.205368825s to provisionDockerMachine
	I1217 20:03:16.273663  670841 client.go:176] duration metric: took 9.749161975s to LocalClient.Create
	I1217 20:03:16.273684  670841 start.go:167] duration metric: took 9.749229635s to libmachine.API.Create "calico-601560"
	I1217 20:03:16.273694  670841 start.go:293] postStartSetup for "calico-601560" (driver="docker")
	I1217 20:03:16.273713  670841 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 20:03:16.273797  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 20:03:16.273849  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.293164  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.405690  670841 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 20:03:16.411884  670841 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 20:03:16.411924  670841 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 20:03:16.411939  670841 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/addons for local assets ...
	I1217 20:03:16.411998  670841 filesync.go:126] Scanning /home/jenkins/minikube-integration/22186-372245/.minikube/files for local assets ...
	I1217 20:03:16.412091  670841 filesync.go:149] local asset: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem -> 3757972.pem in /etc/ssl/certs
	I1217 20:03:16.412255  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 20:03:16.425334  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:03:16.457378  670841 start.go:296] duration metric: took 183.663577ms for postStartSetup
	I1217 20:03:16.458843  670841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-601560
	I1217 20:03:16.488319  670841 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/config.json ...
	I1217 20:03:16.488587  670841 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 20:03:16.488637  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.516820  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.638115  670841 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 20:03:16.643746  670841 start.go:128] duration metric: took 10.121327214s to createHost
	I1217 20:03:16.643773  670841 start.go:83] releasing machines lock for "calico-601560", held for 10.121467733s
	I1217 20:03:16.643838  670841 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-601560
	I1217 20:03:16.668442  670841 ssh_runner.go:195] Run: cat /version.json
	I1217 20:03:16.668510  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.668528  670841 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1217 20:03:16.668717  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:16.695591  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.696443  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:16.896713  670841 ssh_runner.go:195] Run: systemctl --version
	I1217 20:03:16.905676  670841 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1217 20:03:16.958282  670841 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 20:03:16.964720  670841 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 20:03:16.964800  670841 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 20:03:16.997994  670841 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 20:03:16.998019  670841 start.go:496] detecting cgroup driver to use...
	I1217 20:03:16.998053  670841 detect.go:190] detected "systemd" cgroup driver on host os
	I1217 20:03:16.998114  670841 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 20:03:17.018338  670841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 20:03:17.035675  670841 docker.go:218] disabling cri-docker service (if available) ...
	I1217 20:03:17.035750  670841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1217 20:03:17.058121  670841 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1217 20:03:17.083226  670841 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1217 20:03:17.200624  670841 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1217 20:03:17.313524  670841 docker.go:234] disabling docker service ...
	I1217 20:03:17.313613  670841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1217 20:03:17.335716  670841 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1217 20:03:17.351352  670841 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1217 20:03:17.465295  670841 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1217 20:03:17.589889  670841 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 20:03:17.606031  670841 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 20:03:17.621660  670841 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1217 20:03:17.621915  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.634404  670841 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1217 20:03:17.634486  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.645731  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.656201  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.667047  670841 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 20:03:17.676894  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.687247  670841 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.703651  670841 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1217 20:03:17.713807  670841 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 20:03:17.723176  670841 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 20:03:17.731324  670841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:17.821245  670841 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1217 20:03:18.261378  670841 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1217 20:03:18.261483  670841 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1217 20:03:18.265978  670841 start.go:564] Will wait 60s for crictl version
	I1217 20:03:18.266037  670841 ssh_runner.go:195] Run: which crictl
	I1217 20:03:18.269929  670841 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 20:03:18.296064  670841 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.3
	RuntimeApiVersion:  v1
	I1217 20:03:18.296202  670841 ssh_runner.go:195] Run: crio --version
	I1217 20:03:18.326516  670841 ssh_runner.go:195] Run: crio --version
	I1217 20:03:18.357661  670841 out.go:179] * Preparing Kubernetes v1.34.3 on CRI-O 1.34.3 ...
	W1217 20:03:13.633954  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:16.133355  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:14.459986  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:16.461902  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:18.359057  670841 cli_runner.go:164] Run: docker network inspect calico-601560 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 20:03:18.378394  670841 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1217 20:03:18.382759  670841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:03:18.393903  670841 kubeadm.go:884] updating cluster {Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 20:03:18.394049  670841 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
	I1217 20:03:18.394164  670841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:03:18.429026  670841 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:03:18.429049  670841 crio.go:433] Images already preloaded, skipping extraction
	I1217 20:03:18.429122  670841 ssh_runner.go:195] Run: sudo crictl images --output json
	I1217 20:03:18.457449  670841 crio.go:514] all images are preloaded for cri-o runtime.
	I1217 20:03:18.457476  670841 cache_images.go:86] Images are preloaded, skipping loading
	I1217 20:03:18.457488  670841 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.3 crio true true} ...
	I1217 20:03:18.457616  670841 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-601560 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1217 20:03:18.457766  670841 ssh_runner.go:195] Run: crio config
	I1217 20:03:18.512976  670841 cni.go:84] Creating CNI manager for "calico"
	I1217 20:03:18.513005  670841 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 20:03:18.513030  670841 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-601560 NodeName:calico-601560 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 20:03:18.513243  670841 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-601560"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 20:03:18.513329  670841 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1217 20:03:18.521923  670841 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 20:03:18.521995  670841 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 20:03:18.530138  670841 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1217 20:03:18.543217  670841 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 20:03:18.560033  670841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1217 20:03:18.574304  670841 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 20:03:18.578553  670841 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 20:03:18.589114  670841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:18.673746  670841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:03:18.698045  670841 certs.go:69] Setting up /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560 for IP: 192.168.94.2
	I1217 20:03:18.698070  670841 certs.go:195] generating shared ca certs ...
	I1217 20:03:18.698115  670841 certs.go:227] acquiring lock for ca certs: {Name:mk6c0a4a99609de13fb0b54aca94f9165cc7856c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.698294  670841 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key
	I1217 20:03:18.698348  670841 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key
	I1217 20:03:18.698362  670841 certs.go:257] generating profile certs ...
	I1217 20:03:18.698430  670841 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.key
	I1217 20:03:18.698447  670841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.crt with IP's: []
	I1217 20:03:18.767536  670841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.crt ...
	I1217 20:03:18.767564  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.crt: {Name:mk9432b36e502caa91a481e22b2148bdf1b5a0d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.767747  670841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.key ...
	I1217 20:03:18.767760  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/client.key: {Name:mk0aa051b1af273ba8dda0ecd55fff85b70738d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.767874  670841 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960
	I1217 20:03:18.767892  670841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 20:03:18.836433  670841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960 ...
	I1217 20:03:18.836465  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960: {Name:mk7c226a215234bbd382f19004717f88577a0952 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.836672  670841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960 ...
	I1217 20:03:18.836686  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960: {Name:mk874ef5860b4367b6edcbb5951bd272e8d07ee5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:18.836769  670841 certs.go:382] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt.85c9d960 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt
	I1217 20:03:18.836869  670841 certs.go:386] copying /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key.85c9d960 -> /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key
	I1217 20:03:18.836933  670841 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key
	I1217 20:03:18.836948  670841 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt with IP's: []
	I1217 20:03:19.015934  670841 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt ...
	I1217 20:03:19.015971  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt: {Name:mkbda0282c222c3ee0f68a01da3f3a26249900be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:19.016179  670841 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key ...
	I1217 20:03:19.016202  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key: {Name:mk9b046a017a0c67c4a76c3103a96847209531f4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:19.016379  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem (1338 bytes)
	W1217 20:03:19.016417  670841 certs.go:480] ignoring /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797_empty.pem, impossibly tiny 0 bytes
	I1217 20:03:19.016427  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca-key.pem (1675 bytes)
	I1217 20:03:19.016452  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/ca.pem (1082 bytes)
	I1217 20:03:19.016478  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/cert.pem (1123 bytes)
	I1217 20:03:19.016500  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/certs/key.pem (1675 bytes)
	I1217 20:03:19.016537  670841 certs.go:484] found cert: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem (1708 bytes)
	I1217 20:03:19.017148  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 20:03:19.037606  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1217 20:03:19.057014  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 20:03:19.076665  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 20:03:19.096520  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1217 20:03:19.115530  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 20:03:19.134976  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 20:03:19.154282  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/calico-601560/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 20:03:19.173406  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 20:03:19.193624  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/certs/375797.pem --> /usr/share/ca-certificates/375797.pem (1338 bytes)
	I1217 20:03:19.212968  670841 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/ssl/certs/3757972.pem --> /usr/share/ca-certificates/3757972.pem (1708 bytes)
	I1217 20:03:19.231415  670841 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 20:03:19.245722  670841 ssh_runner.go:195] Run: openssl version
	I1217 20:03:19.252257  670841 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.260388  670841 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/375797.pem /etc/ssl/certs/375797.pem
	I1217 20:03:19.268681  670841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.272908  670841 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 19:32 /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.272982  670841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/375797.pem
	I1217 20:03:19.308399  670841 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 20:03:19.317292  670841 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/375797.pem /etc/ssl/certs/51391683.0
	I1217 20:03:19.325519  670841 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.333430  670841 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/3757972.pem /etc/ssl/certs/3757972.pem
	I1217 20:03:19.341556  670841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.345735  670841 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 19:32 /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.345789  670841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3757972.pem
	I1217 20:03:19.381717  670841 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 20:03:19.390332  670841 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/3757972.pem /etc/ssl/certs/3ec20f2e.0
	I1217 20:03:19.399606  670841 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.407353  670841 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 20:03:19.416130  670841 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.420861  670841 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 19:24 /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.420926  670841 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 20:03:19.460029  670841 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 20:03:19.468941  670841 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 20:03:19.478531  670841 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 20:03:19.482955  670841 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 20:03:19.483017  670841 kubeadm.go:401] StartCluster: {Name:calico-601560 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:calico-601560 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 20:03:19.483150  670841 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1217 20:03:19.483219  670841 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1217 20:03:19.512353  670841 cri.go:89] found id: ""
	I1217 20:03:19.512430  670841 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 20:03:19.521235  670841 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 20:03:19.529915  670841 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 20:03:19.529978  670841 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 20:03:19.538113  670841 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 20:03:19.538136  670841 kubeadm.go:158] found existing configuration files:
	
	I1217 20:03:19.538192  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 20:03:19.546158  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 20:03:19.546227  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 20:03:19.554339  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 20:03:19.562696  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 20:03:19.562756  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 20:03:19.570697  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 20:03:19.578950  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 20:03:19.579017  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 20:03:19.589583  670841 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 20:03:19.598955  670841 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 20:03:19.599007  670841 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 20:03:19.608025  670841 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 20:03:19.655407  670841 kubeadm.go:319] [init] Using Kubernetes version: v1.34.3
	I1217 20:03:19.655500  670841 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 20:03:19.677239  670841 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 20:03:19.677303  670841 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1045-gcp
	I1217 20:03:19.677335  670841 kubeadm.go:319] OS: Linux
	I1217 20:03:19.677386  670841 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 20:03:19.677444  670841 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 20:03:19.677569  670841 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 20:03:19.677632  670841 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 20:03:19.677674  670841 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 20:03:19.677715  670841 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 20:03:19.677773  670841 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 20:03:19.677816  670841 kubeadm.go:319] CGROUPS_IO: enabled
	I1217 20:03:19.741105  670841 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 20:03:19.741272  670841 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 20:03:19.741546  670841 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 20:03:19.750023  670841 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 20:03:19.753242  670841 out.go:252]   - Generating certificates and keys ...
	I1217 20:03:19.753357  670841 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 20:03:19.753456  670841 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 20:03:20.239110  670841 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 20:03:20.608276  670841 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 20:03:21.218915  670841 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 20:03:21.525425  670841 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 20:03:21.851767  670841 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 20:03:21.851961  670841 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [calico-601560 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:03:21.939356  670841 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 20:03:21.939473  670841 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [calico-601560 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 20:03:22.307543  670841 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 20:03:22.421761  670841 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 20:03:22.510564  670841 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 20:03:22.510665  670841 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 20:03:22.609012  670841 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 20:03:22.646737  670841 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 20:03:22.879851  670841 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 20:03:23.141326  670841 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 20:03:23.238009  670841 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 20:03:23.238550  670841 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 20:03:23.242668  670841 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1217 20:03:18.633906  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:21.133478  661899 node_ready.go:57] node "kindnet-601560" has "Ready":"False" status (will retry)
	W1217 20:03:18.959478  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:20.960520  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	W1217 20:03:22.961318  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:23.244170  670841 out.go:252]   - Booting up control plane ...
	I1217 20:03:23.244253  670841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 20:03:23.244346  670841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 20:03:23.245131  670841 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 20:03:23.259283  670841 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 20:03:23.259417  670841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 20:03:23.266344  670841 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 20:03:23.266700  670841 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 20:03:23.266748  670841 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 20:03:23.373780  670841 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 20:03:23.373931  670841 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 20:03:23.875663  670841 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.022631ms
	I1217 20:03:23.879701  670841 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1217 20:03:23.879851  670841 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1217 20:03:23.880003  670841 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1217 20:03:23.880154  670841 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1217 20:03:25.939785  670841 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.060056721s
	I1217 20:03:26.154205  670841 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.274445016s
	I1217 20:03:23.632909  661899 node_ready.go:49] node "kindnet-601560" is "Ready"
	I1217 20:03:23.632943  661899 node_ready.go:38] duration metric: took 14.503161667s for node "kindnet-601560" to be "Ready" ...
	I1217 20:03:23.632969  661899 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:03:23.633033  661899 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:03:23.646556  661899 api_server.go:72] duration metric: took 14.857848207s to wait for apiserver process to appear ...
	I1217 20:03:23.646585  661899 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:03:23.646607  661899 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1217 20:03:23.651302  661899 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1217 20:03:23.652378  661899 api_server.go:141] control plane version: v1.34.3
	I1217 20:03:23.652406  661899 api_server.go:131] duration metric: took 5.812904ms to wait for apiserver health ...
	I1217 20:03:23.652424  661899 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:03:23.656141  661899 system_pods.go:59] 8 kube-system pods found
	I1217 20:03:23.656172  661899 system_pods.go:61] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:23.656177  661899 system_pods.go:61] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:23.656183  661899 system_pods.go:61] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:23.656187  661899 system_pods.go:61] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:23.656197  661899 system_pods.go:61] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:23.656203  661899 system_pods.go:61] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:23.656208  661899 system_pods.go:61] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:23.656216  661899 system_pods.go:61] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:23.656232  661899 system_pods.go:74] duration metric: took 3.80108ms to wait for pod list to return data ...
	I1217 20:03:23.656253  661899 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:03:23.658725  661899 default_sa.go:45] found service account: "default"
	I1217 20:03:23.658745  661899 default_sa.go:55] duration metric: took 2.483109ms for default service account to be created ...
	I1217 20:03:23.658771  661899 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:03:23.662172  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:23.662213  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:23.662220  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:23.662228  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:23.662234  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:23.662240  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:23.662245  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:23.662251  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:23.662258  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:23.662296  661899 retry.go:31] will retry after 256.842456ms: missing components: kube-dns
	I1217 20:03:23.924310  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:23.924362  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:23.924373  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:23.924382  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:23.924391  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:23.924396  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:23.924482  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:23.924487  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:23.924494  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:23.924518  661899 retry.go:31] will retry after 245.393606ms: missing components: kube-dns
	I1217 20:03:24.174541  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:24.174575  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:24.174581  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:24.174588  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:24.174592  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:24.174595  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:24.174604  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:24.174607  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:24.174612  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:24.174627  661899 retry.go:31] will retry after 471.649962ms: missing components: kube-dns
	I1217 20:03:24.653747  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:24.653810  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:24.653821  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:24.653832  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:24.653840  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:24.653854  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:24.653865  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:24.653873  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:24.653883  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:24.653914  661899 retry.go:31] will retry after 610.069127ms: missing components: kube-dns
	I1217 20:03:25.269066  661899 system_pods.go:86] 8 kube-system pods found
	I1217 20:03:25.269118  661899 system_pods.go:89] "coredns-66bc5c9577-8jj68" [53709eea-dcb0-4d6f-a32e-32a9f2de468b] Running
	I1217 20:03:25.269127  661899 system_pods.go:89] "etcd-kindnet-601560" [eacdb7f0-ce90-4edb-af21-889ecfc65870] Running
	I1217 20:03:25.269133  661899 system_pods.go:89] "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
	I1217 20:03:25.269138  661899 system_pods.go:89] "kube-apiserver-kindnet-601560" [d6c0fb58-d7a1-4f0a-b3ed-576f5b0ca96c] Running
	I1217 20:03:25.269144  661899 system_pods.go:89] "kube-controller-manager-kindnet-601560" [95a0b8e8-e4bf-4335-a52c-90694161fdad] Running
	I1217 20:03:25.269151  661899 system_pods.go:89] "kube-proxy-bskt5" [96940e08-5a37-4ac7-821a-fd8a448cc3df] Running
	I1217 20:03:25.269157  661899 system_pods.go:89] "kube-scheduler-kindnet-601560" [a11357f9-84d1-431c-9218-9f6845d307a7] Running
	I1217 20:03:25.269162  661899 system_pods.go:89] "storage-provisioner" [fc116921-19d7-4900-b5ff-8ab0fab65ffa] Running
	I1217 20:03:25.269173  661899 system_pods.go:126] duration metric: took 1.610390788s to wait for k8s-apps to be running ...
	I1217 20:03:25.269190  661899 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 20:03:25.269306  661899 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 20:03:25.287846  661899 system_svc.go:56] duration metric: took 18.644374ms WaitForService to wait for kubelet
	I1217 20:03:25.287879  661899 kubeadm.go:587] duration metric: took 16.499179711s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 20:03:25.287990  661899 node_conditions.go:102] verifying NodePressure condition ...
	I1217 20:03:25.291437  661899 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1217 20:03:25.291478  661899 node_conditions.go:123] node cpu capacity is 8
	I1217 20:03:25.291507  661899 node_conditions.go:105] duration metric: took 3.509904ms to run NodePressure ...
	I1217 20:03:25.291524  661899 start.go:242] waiting for startup goroutines ...
	I1217 20:03:25.291537  661899 start.go:247] waiting for cluster config update ...
	I1217 20:03:25.291553  661899 start.go:256] writing updated cluster config ...
	I1217 20:03:25.291932  661899 ssh_runner.go:195] Run: rm -f paused
	I1217 20:03:25.297369  661899 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:25.301961  661899 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8jj68" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.307475  661899 pod_ready.go:94] pod "coredns-66bc5c9577-8jj68" is "Ready"
	I1217 20:03:25.308469  661899 pod_ready.go:86] duration metric: took 6.47552ms for pod "coredns-66bc5c9577-8jj68" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.311036  661899 pod_ready.go:83] waiting for pod "etcd-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.316557  661899 pod_ready.go:94] pod "etcd-kindnet-601560" is "Ready"
	I1217 20:03:25.316589  661899 pod_ready.go:86] duration metric: took 5.529956ms for pod "etcd-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.319049  661899 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.324345  661899 pod_ready.go:94] pod "kube-apiserver-kindnet-601560" is "Ready"
	I1217 20:03:25.324376  661899 pod_ready.go:86] duration metric: took 5.29554ms for pod "kube-apiserver-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.327429  661899 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.707165  661899 pod_ready.go:94] pod "kube-controller-manager-kindnet-601560" is "Ready"
	I1217 20:03:25.707196  661899 pod_ready.go:86] duration metric: took 379.735745ms for pod "kube-controller-manager-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.904301  661899 pod_ready.go:83] waiting for pod "kube-proxy-bskt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.302475  661899 pod_ready.go:94] pod "kube-proxy-bskt5" is "Ready"
	I1217 20:03:26.302513  661899 pod_ready.go:86] duration metric: took 398.177069ms for pod "kube-proxy-bskt5" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.503007  661899 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.902326  661899 pod_ready.go:94] pod "kube-scheduler-kindnet-601560" is "Ready"
	I1217 20:03:26.902360  661899 pod_ready.go:86] duration metric: took 399.326567ms for pod "kube-scheduler-kindnet-601560" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.902375  661899 pod_ready.go:40] duration metric: took 1.604953329s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:26.947991  661899 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:03:26.950057  661899 out.go:179] * Done! kubectl is now configured to use "kindnet-601560" cluster and "default" namespace by default
	W1217 20:03:25.460486  663785 pod_ready.go:104] pod "coredns-66bc5c9577-wkvhv" is not "Ready", error: <nil>
	I1217 20:03:25.959552  663785 pod_ready.go:94] pod "coredns-66bc5c9577-wkvhv" is "Ready"
	I1217 20:03:25.959580  663785 pod_ready.go:86] duration metric: took 31.00625432s for pod "coredns-66bc5c9577-wkvhv" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.961836  663785 pod_ready.go:83] waiting for pod "etcd-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.966292  663785 pod_ready.go:94] pod "etcd-embed-certs-147021" is "Ready"
	I1217 20:03:25.966315  663785 pod_ready.go:86] duration metric: took 4.451271ms for pod "etcd-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.968353  663785 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.972371  663785 pod_ready.go:94] pod "kube-apiserver-embed-certs-147021" is "Ready"
	I1217 20:03:25.972392  663785 pod_ready.go:86] duration metric: took 4.014818ms for pod "kube-apiserver-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:25.974517  663785 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.160111  663785 pod_ready.go:94] pod "kube-controller-manager-embed-certs-147021" is "Ready"
	I1217 20:03:26.160146  663785 pod_ready.go:86] duration metric: took 185.603925ms for pod "kube-controller-manager-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.357699  663785 pod_ready.go:83] waiting for pod "kube-proxy-nwn9n" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.757915  663785 pod_ready.go:94] pod "kube-proxy-nwn9n" is "Ready"
	I1217 20:03:26.757951  663785 pod_ready.go:86] duration metric: took 400.221364ms for pod "kube-proxy-nwn9n" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:26.958004  663785 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:27.357953  663785 pod_ready.go:94] pod "kube-scheduler-embed-certs-147021" is "Ready"
	I1217 20:03:27.357989  663785 pod_ready.go:86] duration metric: took 399.952058ms for pod "kube-scheduler-embed-certs-147021" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 20:03:27.358010  663785 pod_ready.go:40] duration metric: took 32.409077301s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 20:03:27.412909  663785 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1217 20:03:27.415038  663785 out.go:179] * Done! kubectl is now configured to use "embed-certs-147021" cluster and "default" namespace by default
	I1217 20:03:27.882278  670841 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002438831s
	I1217 20:03:27.899395  670841 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1217 20:03:27.910240  670841 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1217 20:03:27.918636  670841 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1217 20:03:27.918906  670841 kubeadm.go:319] [mark-control-plane] Marking the node calico-601560 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1217 20:03:27.927196  670841 kubeadm.go:319] [bootstrap-token] Using token: 2aft5u.weq4vsf1xmievkcr
	I1217 20:03:27.928731  670841 out.go:252]   - Configuring RBAC rules ...
	I1217 20:03:27.928880  670841 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1217 20:03:27.933567  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1217 20:03:27.939106  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1217 20:03:27.941823  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1217 20:03:27.944845  670841 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1217 20:03:27.948818  670841 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1217 20:03:28.288153  670841 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1217 20:03:28.708604  670841 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1217 20:03:29.289518  670841 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1217 20:03:29.290969  670841 kubeadm.go:319] 
	I1217 20:03:29.291067  670841 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1217 20:03:29.291094  670841 kubeadm.go:319] 
	I1217 20:03:29.291216  670841 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1217 20:03:29.291227  670841 kubeadm.go:319] 
	I1217 20:03:29.291262  670841 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1217 20:03:29.291355  670841 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1217 20:03:29.291412  670841 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1217 20:03:29.291417  670841 kubeadm.go:319] 
	I1217 20:03:29.291485  670841 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1217 20:03:29.291495  670841 kubeadm.go:319] 
	I1217 20:03:29.291655  670841 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1217 20:03:29.291680  670841 kubeadm.go:319] 
	I1217 20:03:29.291752  670841 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1217 20:03:29.291876  670841 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1217 20:03:29.291966  670841 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1217 20:03:29.291974  670841 kubeadm.go:319] 
	I1217 20:03:29.292141  670841 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1217 20:03:29.292255  670841 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1217 20:03:29.292265  670841 kubeadm.go:319] 
	I1217 20:03:29.292368  670841 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2aft5u.weq4vsf1xmievkcr \
	I1217 20:03:29.292485  670841 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f \
	I1217 20:03:29.292528  670841 kubeadm.go:319] 	--control-plane 
	I1217 20:03:29.292547  670841 kubeadm.go:319] 
	I1217 20:03:29.292662  670841 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1217 20:03:29.292673  670841 kubeadm.go:319] 
	I1217 20:03:29.292781  670841 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2aft5u.weq4vsf1xmievkcr \
	I1217 20:03:29.292943  670841 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8ef867ecc15c7bd9eb9f87ba84e4b5e1f9c90bbe1fbebab60bd7b5b08cd9129f 
	I1217 20:03:29.295607  670841 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1045-gcp\n", err: exit status 1
	I1217 20:03:29.295717  670841 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 20:03:29.295751  670841 cni.go:84] Creating CNI manager for "calico"
	I1217 20:03:29.298347  670841 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I1217 20:03:29.299707  670841 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.3/kubectl ...
	I1217 20:03:29.299731  670841 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (329943 bytes)
	I1217 20:03:29.315678  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1217 20:03:30.131447  670841 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1217 20:03:30.131524  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:30.131571  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-601560 minikube.k8s.io/updated_at=2025_12_17T20_03_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924 minikube.k8s.io/name=calico-601560 minikube.k8s.io/primary=true
	I1217 20:03:30.218315  670841 ops.go:34] apiserver oom_adj: -16
	I1217 20:03:30.218328  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:30.718470  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:31.219276  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:31.719273  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:32.218557  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:32.718913  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:33.219105  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:33.718420  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:34.218969  670841 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.3/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1217 20:03:34.351057  670841 kubeadm.go:1114] duration metric: took 4.219598036s to wait for elevateKubeSystemPrivileges
	I1217 20:03:34.351188  670841 kubeadm.go:403] duration metric: took 14.868171074s to StartCluster
	I1217 20:03:34.351285  670841 settings.go:142] acquiring lock: {Name:mk01c60672ff2b8f50b037d6096a0a4590636830 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:34.351387  670841 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 20:03:34.355527  670841 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22186-372245/kubeconfig: {Name:mkbe8926b9014d2af611aee93b1188b72880b6c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 20:03:34.356168  670841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1217 20:03:34.356791  670841 config.go:182] Loaded profile config "calico-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 20:03:34.356919  670841 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1217 20:03:34.356946  670841 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 20:03:34.357552  670841 addons.go:70] Setting storage-provisioner=true in profile "calico-601560"
	I1217 20:03:34.357589  670841 addons.go:239] Setting addon storage-provisioner=true in "calico-601560"
	I1217 20:03:34.357634  670841 addons.go:70] Setting default-storageclass=true in profile "calico-601560"
	I1217 20:03:34.357661  670841 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "calico-601560"
	I1217 20:03:34.358038  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:34.357640  670841 host.go:66] Checking if "calico-601560" exists ...
	I1217 20:03:34.358631  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:34.360358  670841 out.go:179] * Verifying Kubernetes components...
	I1217 20:03:34.361516  670841 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 20:03:34.391461  670841 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 20:03:34.392909  670841 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:34.392936  670841 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 20:03:34.393004  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:34.398934  670841 addons.go:239] Setting addon default-storageclass=true in "calico-601560"
	I1217 20:03:34.398980  670841 host.go:66] Checking if "calico-601560" exists ...
	I1217 20:03:34.399587  670841 cli_runner.go:164] Run: docker container inspect calico-601560 --format={{.State.Status}}
	I1217 20:03:34.442958  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:34.448994  670841 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:34.449027  670841 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 20:03:34.449314  670841 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-601560
	I1217 20:03:34.486757  670841 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/calico-601560/id_rsa Username:docker}
	I1217 20:03:34.586402  670841 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1217 20:03:34.663309  670841 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 20:03:34.668385  670841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 20:03:34.688682  670841 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 20:03:34.941746  670841 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1217 20:03:34.945927  670841 node_ready.go:35] waiting up to 15m0s for node "calico-601560" to be "Ready" ...
	I1217 20:03:35.250259  670841 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1217 20:03:35.251908  670841 addons.go:530] duration metric: took 894.919701ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1217 20:03:35.448730  670841 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-601560" context rescaled to 1 replicas
	W1217 20:03:36.949992  670841 node_ready.go:57] node "calico-601560" has "Ready":"False" status (will retry)
	I1217 20:03:38.949836  670841 node_ready.go:49] node "calico-601560" is "Ready"
	I1217 20:03:38.949864  670841 node_ready.go:38] duration metric: took 4.003891228s for node "calico-601560" to be "Ready" ...
	I1217 20:03:38.949883  670841 api_server.go:52] waiting for apiserver process to appear ...
	I1217 20:03:38.949932  670841 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 20:03:38.963343  670841 api_server.go:72] duration metric: took 4.605966431s to wait for apiserver process to appear ...
	I1217 20:03:38.963370  670841 api_server.go:88] waiting for apiserver healthz status ...
	I1217 20:03:38.963392  670841 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1217 20:03:38.969266  670841 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1217 20:03:38.970382  670841 api_server.go:141] control plane version: v1.34.3
	I1217 20:03:38.970416  670841 api_server.go:131] duration metric: took 7.037247ms to wait for apiserver health ...
	I1217 20:03:38.970428  670841 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 20:03:38.975014  670841 system_pods.go:59] 9 kube-system pods found
	I1217 20:03:38.975060  670841 system_pods.go:61] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:38.975107  670841 system_pods.go:61] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:38.975121  670841 system_pods.go:61] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:38.975127  670841 system_pods.go:61] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:38.975163  670841 system_pods.go:61] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:38.975180  670841 system_pods.go:61] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:38.975218  670841 system_pods.go:61] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:38.975233  670841 system_pods.go:61] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:38.975240  670841 system_pods.go:61] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:38.975248  670841 system_pods.go:74] duration metric: took 4.813123ms to wait for pod list to return data ...
	I1217 20:03:38.975259  670841 default_sa.go:34] waiting for default service account to be created ...
	I1217 20:03:38.979364  670841 default_sa.go:45] found service account: "default"
	I1217 20:03:38.979398  670841 default_sa.go:55] duration metric: took 4.131056ms for default service account to be created ...
	I1217 20:03:38.979413  670841 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 20:03:38.985950  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:38.986005  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:38.986018  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:38.986034  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:38.986040  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:38.986048  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:38.986063  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:38.986070  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:38.986106  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:38.986115  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:38.986153  670841 retry.go:31] will retry after 298.918033ms: missing components: kube-dns
	I1217 20:03:39.298711  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:39.298772  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:39.298787  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:39.298799  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:39.298807  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:39.298817  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:39.298827  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:39.298834  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:39.298842  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:39.298850  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:39.298883  670841 retry.go:31] will retry after 285.981632ms: missing components: kube-dns
	I1217 20:03:39.592693  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:39.592738  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:39.592751  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:39.592762  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:39.592769  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:39.592779  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:39.592787  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:39.592809  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:39.592822  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:39.592830  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 20:03:39.592854  670841 retry.go:31] will retry after 333.324304ms: missing components: kube-dns
	I1217 20:03:39.941982  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:39.942027  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:39.942040  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:39.942050  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:39.942057  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:39.942066  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:39.942088  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:39.942095  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:39.942103  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:39.942108  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Running
	I1217 20:03:39.942134  670841 retry.go:31] will retry after 411.354945ms: missing components: kube-dns
	I1217 20:03:40.359871  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:40.359917  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:40.359933  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:40.359942  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:40.359949  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:40.359959  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:40.359966  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:40.359980  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:40.359988  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 20:03:40.359993  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Running
	I1217 20:03:40.360014  670841 retry.go:31] will retry after 558.71846ms: missing components: kube-dns
	I1217 20:03:40.924257  670841 system_pods.go:86] 9 kube-system pods found
	I1217 20:03:40.924299  670841 system_pods.go:89] "calico-kube-controllers-5c676f698c-87rdt" [8344584d-c9d9-4d60-b9d6-8969b818dd96] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1217 20:03:40.924312  670841 system_pods.go:89] "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [ebpf-bootstrap]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1217 20:03:40.924322  670841 system_pods.go:89] "coredns-66bc5c9577-6zhb9" [396a593f-1e88-4649-9aac-1020901108e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 20:03:40.924329  670841 system_pods.go:89] "etcd-calico-601560" [a413bf3c-3090-4a23-888e-4172181ffdbc] Running
	I1217 20:03:40.924337  670841 system_pods.go:89] "kube-apiserver-calico-601560" [3802348e-f912-481e-bff3-0488a25434e5] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 20:03:40.924345  670841 system_pods.go:89] "kube-controller-manager-calico-601560" [02ef2474-6cde-416e-b6de-4b6d9922998d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 20:03:40.924352  670841 system_pods.go:89] "kube-proxy-l6w9t" [fd770e17-7fb8-432d-a348-d61c6f958e31] Running
	I1217 20:03:40.924358  670841 system_pods.go:89] "kube-scheduler-calico-601560" [70a6d486-c221-4c28-8726-6daf4871c8ab] Running
	I1217 20:03:40.924363  670841 system_pods.go:89] "storage-provisioner" [dceef9ee-88ec-4dec-9567-57006e2f8327] Running
	I1217 20:03:40.924384  670841 retry.go:31] will retry after 622.493188ms: missing components: kube-dns
	
	
	==> CRI-O <==
	Dec 17 20:03:05 embed-certs-147021 crio[572]: time="2025-12-17T20:03:05.192982774Z" level=info msg="Created container 4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf/kubernetes-dashboard" id=310ca953-a492-4702-8998-16bbf9e3585d name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:05 embed-certs-147021 crio[572]: time="2025-12-17T20:03:05.193944852Z" level=info msg="Starting container: 4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf" id=05fb3d16-6e5d-426c-a0ba-ad15cafc2222 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:03:05 embed-certs-147021 crio[572]: time="2025-12-17T20:03:05.196601784Z" level=info msg="Started container" PID=1744 containerID=4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf/kubernetes-dashboard id=05fb3d16-6e5d-426c-a0ba-ad15cafc2222 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0f8131503cb2aaf2cbfe502959515c05d035e1514b90b74859656ed4eb04939d
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.57679923Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=280fe226-71c4-49ba-ba79-20ca9125afdc name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.579780824Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=dc27c7fc-646b-46da-ada0-6206e546ea13 name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.583086349Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper" id=85a02343-2ac2-4d64-8347-a303942e9209 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.583248964Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.592265481Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.592804809Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.621787138Z" level=info msg="Created container 7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper" id=85a02343-2ac2-4d64-8347-a303942e9209 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.622549831Z" level=info msg="Starting container: 7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b" id=6a978afe-b437-4e93-b8ff-a2d2d3b0c1c8 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.624942722Z" level=info msg="Started container" PID=1762 containerID=7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper id=6a978afe-b437-4e93-b8ff-a2d2d3b0c1c8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a4020d8e1f7665847acdc9a95cc59e1e385806fb4667cd2e1c55df7880f1d07d
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.742313119Z" level=info msg="Removing container: bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e" id=7453514c-8ad3-4f89-9248-eef0917c0ade name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:03:19 embed-certs-147021 crio[572]: time="2025-12-17T20:03:19.754319235Z" level=info msg="Removed container bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z/dashboard-metrics-scraper" id=7453514c-8ad3-4f89-9248-eef0917c0ade name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.757193521Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=aa5f7118-ccd0-4fa2-8c56-8d08ba5f840e name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.758413451Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=eae74e2d-18c2-45a0-b44a-2d12fd6cb2da name=/runtime.v1.ImageService/ImageStatus
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.759538275Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=89531671-2b5a-4110-83d0-966d2e6d8658 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.759678247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.764542451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.764733392Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7d03fb8591ffad9224341ce3fc4b64ba94751e817f0559f188311253b196c956/merged/etc/passwd: no such file or directory"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.764763186Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7d03fb8591ffad9224341ce3fc4b64ba94751e817f0559f188311253b196c956/merged/etc/group: no such file or directory"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.765043016Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.795263289Z" level=info msg="Created container a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e: kube-system/storage-provisioner/storage-provisioner" id=89531671-2b5a-4110-83d0-966d2e6d8658 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.795937565Z" level=info msg="Starting container: a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e" id=c3f3bc47-fe93-4e5e-a572-ae3066f18002 name=/runtime.v1.RuntimeService/StartContainer
	Dec 17 20:03:24 embed-certs-147021 crio[572]: time="2025-12-17T20:03:24.798285384Z" level=info msg="Started container" PID=1776 containerID=a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e description=kube-system/storage-provisioner/storage-provisioner id=c3f3bc47-fe93-4e5e-a572-ae3066f18002 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09d85284adef2a550abdb9cd1b80ec22c440fd0dd844ec4ce3a5fd6a78991530
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	a97831dd0cfa9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   09d85284adef2       storage-provisioner                          kube-system
	7d20bd215cfe1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   a4020d8e1f766       dashboard-metrics-scraper-6ffb444bf9-84b8z   kubernetes-dashboard
	4ebfa66d3b28e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   0f8131503cb2a       kubernetes-dashboard-855c9754f9-27rqf        kubernetes-dashboard
	a12f276e6990e       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   b91d101279124       busybox                                      default
	42c9fb76fa617       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   c0ec26f00e677       coredns-66bc5c9577-wkvhv                     kube-system
	2766a8fcb5ebd       36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691                                           52 seconds ago      Running             kube-proxy                  0                   98e5a82c47446       kube-proxy-nwn9n                             kube-system
	537a5407ce604       4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251                                           52 seconds ago      Running             kindnet-cni                 0                   e9975a064bca8       kindnet-qp6z8                                kube-system
	138ac303d832d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   09d85284adef2       storage-provisioner                          kube-system
	908edcd5f5289       a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1                                           54 seconds ago      Running             etcd                        0                   6eca325648229       etcd-embed-certs-147021                      kube-system
	9609c0cfa32a6       aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c                                           54 seconds ago      Running             kube-apiserver              0                   22da4e7ab7c21       kube-apiserver-embed-certs-147021            kube-system
	65e71064f4502       5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942                                           54 seconds ago      Running             kube-controller-manager     0                   1dacda031bdb4       kube-controller-manager-embed-certs-147021   kube-system
	d703ea40f171a       aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78                                           54 seconds ago      Running             kube-scheduler              0                   2bd8523b3388c       kube-scheduler-embed-certs-147021            kube-system
	
	
	==> coredns [42c9fb76fa6175d615c9c78f7030f741afc7310992f335396b1970fe704fefae] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58937 - 31191 "HINFO IN 2680119756027112146.7582208013871341038. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.424419354s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-147021
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-147021
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2e96f676eb7e96389e85fe0658a4ede4c4ba6924
	                    minikube.k8s.io/name=embed-certs-147021
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_17T20_01_55_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 17 Dec 2025 20:01:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-147021
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 17 Dec 2025 20:03:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:01:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 17 Dec 2025 20:03:34 +0000   Wed, 17 Dec 2025 20:02:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-147021
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 99cc213c06a11cdf07b2a4d26942818a
	  System UUID:                c55125f4-5cb9-479d-a732-b6dc1626ae27
	  Boot ID:                    832664c8-407a-4bff-a432-3bbc3f20421e
	  Kernel Version:             6.8.0-1045-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.3
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-wkvhv                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-147021                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-qp6z8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-147021             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-147021    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-nwn9n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-147021             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-84b8z    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-27rqf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 116s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x8 over 116s)  kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-147021 event: Registered Node embed-certs-147021 in Controller
	  Normal  NodeReady                93s                  kubelet          Node embed-certs-147021 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node embed-certs-147021 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node embed-certs-147021 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node embed-certs-147021 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node embed-certs-147021 event: Registered Node embed-certs-147021 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 bf cf fd 8a f3 08 06
	[  +0.000372] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 46 d7 50 f9 50 96 08 06
	[Dec17 19:26] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000011] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.015318] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023837] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023881] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +1.023899] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +2.047807] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +4.031540] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[  +8.319118] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000024] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[ +16.382218] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	[Dec17 19:27] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 12 b8 6e 1b fb 93 de a2 46 23 bd 1e 08 00
	
	
	==> etcd [908edcd5f5289ef7311867639a5128a59a15dad0583e878557accbf26efa79fb] <==
	{"level":"warn","ts":"2025-12-17T20:02:52.446622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.455420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.463791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.472174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.480345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53804","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.489515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.497207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.504747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.512635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.519009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.525477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.533099Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.541256Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.548024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.555218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.563696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.571559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.579245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.599349Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34314","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.607744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-17T20:02:52.616224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34354","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-17T20:03:10.257578Z","caller":"traceutil/trace.go:172","msg":"trace[1477758015] linearizableReadLoop","detail":"{readStateIndex:678; appliedIndex:678; }","duration":"172.845086ms","start":"2025-12-17T20:03:10.084709Z","end":"2025-12-17T20:03:10.257554Z","steps":["trace[1477758015] 'read index received'  (duration: 172.836184ms)","trace[1477758015] 'applied index is now lower than readState.Index'  (duration: 7.662µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-17T20:03:10.285189Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"200.413374ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-17T20:03:10.285276Z","caller":"traceutil/trace.go:172","msg":"trace[1788756996] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:646; }","duration":"200.546213ms","start":"2025-12-17T20:03:10.084703Z","end":"2025-12-17T20:03:10.285249Z","steps":["trace[1788756996] 'agreement among raft nodes before linearized reading'  (duration: 172.943275ms)","trace[1788756996] 'range keys from in-memory index tree'  (duration: 27.446754ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-17T20:03:10.285322Z","caller":"traceutil/trace.go:172","msg":"trace[384915860] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"203.549005ms","start":"2025-12-17T20:03:10.081762Z","end":"2025-12-17T20:03:10.285311Z","steps":["trace[384915860] 'process raft request'  (duration: 175.87534ms)","trace[384915860] 'compare'  (duration: 27.544232ms)"],"step_count":2}
	
	
	==> kernel <==
	 20:03:46 up  1:46,  0 user,  load average: 4.83, 3.96, 2.73
	Linux embed-certs-147021 6.8.0-1045-gcp #48~22.04.1-Ubuntu SMP Tue Nov 25 13:07:56 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [537a5407ce604a89aeaa3dfb925609467a6bd3eeb7abd61d4ca526f32aafd92b] <==
	I1217 20:02:54.192454       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1217 20:02:54.192793       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1217 20:02:54.193039       1 main.go:148] setting mtu 1500 for CNI 
	I1217 20:02:54.193069       1 main.go:178] kindnetd IP family: "ipv4"
	I1217 20:02:54.193114       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-12-17T20:02:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1217 20:02:54.496225       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1217 20:02:54.496840       1 controller.go:381] "Waiting for informer caches to sync"
	I1217 20:02:54.496901       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1217 20:02:54.508283       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1217 20:02:54.890516       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1217 20:02:54.890563       1 metrics.go:72] Registering metrics
	I1217 20:02:54.890636       1 controller.go:711] "Syncing nftables rules"
	I1217 20:03:04.494971       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:04.495055       1 main.go:301] handling current node
	I1217 20:03:14.495284       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:14.495333       1 main.go:301] handling current node
	I1217 20:03:24.494995       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:24.495030       1 main.go:301] handling current node
	I1217 20:03:34.495006       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:34.495050       1 main.go:301] handling current node
	I1217 20:03:44.502211       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1217 20:03:44.502360       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9609c0cfa32a680d1b01f25906eb3fc99966c8e66cc7b424a4aaf43f25353e40] <==
	I1217 20:02:53.293656       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1217 20:02:53.293651       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1217 20:02:53.293549       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1217 20:02:53.295215       1 aggregator.go:171] initial CRD sync complete...
	I1217 20:02:53.295257       1 autoregister_controller.go:144] Starting autoregister controller
	I1217 20:02:53.295283       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1217 20:02:53.295306       1 cache.go:39] Caches are synced for autoregister controller
	I1217 20:02:53.309069       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1217 20:02:53.316135       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1217 20:02:53.317605       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1217 20:02:53.344373       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1217 20:02:53.354797       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1217 20:02:53.551002       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:53.551154       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1217 20:02:53.671723       1 controller.go:667] quota admission added evaluator for: namespaces
	I1217 20:02:53.717025       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1217 20:02:53.739397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1217 20:02:53.750299       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1217 20:02:53.802829       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.75.181"}
	I1217 20:02:53.814778       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.108.200"}
	I1217 20:02:54.197831       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1217 20:02:56.873944       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:56.874012       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1217 20:02:57.023283       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1217 20:02:57.123254       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [65e71064f45025b16a8eeb57a2312f4a95a800aca4e77340fff8eb1b3e67c18d] <==
	I1217 20:02:56.619357       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1217 20:02:56.619391       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1217 20:02:56.619416       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1217 20:02:56.619419       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1217 20:02:56.619781       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1217 20:02:56.619823       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1217 20:02:56.619847       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1217 20:02:56.619911       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1217 20:02:56.620037       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1217 20:02:56.620224       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1217 20:02:56.620328       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-147021"
	I1217 20:02:56.620361       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1217 20:02:56.621069       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1217 20:02:56.621124       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1217 20:02:56.621571       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1217 20:02:56.622827       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1217 20:02:56.626004       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1217 20:02:56.626149       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1217 20:02:56.626210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1217 20:02:56.628262       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1217 20:02:56.628413       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1217 20:02:56.628463       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1217 20:02:56.628472       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1217 20:02:56.628480       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1217 20:02:56.648607       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2766a8fcb5ebd7aeee551794853fcba5d9153eca108dbbefaecfd962e38c5f3d] <==
	I1217 20:02:53.979196       1 server_linux.go:53] "Using iptables proxy"
	I1217 20:02:54.051955       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1217 20:02:54.153110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1217 20:02:54.153262       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1217 20:02:54.153392       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1217 20:02:54.173375       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1217 20:02:54.173479       1 server_linux.go:132] "Using iptables Proxier"
	I1217 20:02:54.178983       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1217 20:02:54.179404       1 server.go:527] "Version info" version="v1.34.3"
	I1217 20:02:54.179438       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:54.181027       1 config.go:106] "Starting endpoint slice config controller"
	I1217 20:02:54.181053       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1217 20:02:54.181124       1 config.go:200] "Starting service config controller"
	I1217 20:02:54.181135       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1217 20:02:54.181125       1 config.go:403] "Starting serviceCIDR config controller"
	I1217 20:02:54.181154       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1217 20:02:54.181455       1 config.go:309] "Starting node config controller"
	I1217 20:02:54.181511       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1217 20:02:54.181538       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1217 20:02:54.281214       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1217 20:02:54.281251       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1217 20:02:54.281565       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d703ea40f171a6defb08dbaa7f51e4cb839d82c4c6df2ff17c3ac6931834a231] <==
	I1217 20:02:53.282345       1 serving.go:386] Generated self-signed cert in-memory
	I1217 20:02:54.124386       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1217 20:02:54.124413       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1217 20:02:54.129539       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1217 20:02:54.129592       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1217 20:02:54.129592       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:54.129627       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:54.129653       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:54.129714       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:54.129975       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1217 20:02:54.130040       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1217 20:02:54.230604       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1217 20:02:54.230840       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1217 20:02:54.230893       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Dec 17 20:02:57 embed-certs-147021 kubelet[724]: I1217 20:02:57.173441     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4g4g\" (UniqueName: \"kubernetes.io/projected/5673bd8c-db08-4434-8ac7-ea0584623f5b-kube-api-access-v4g4g\") pod \"dashboard-metrics-scraper-6ffb444bf9-84b8z\" (UID: \"5673bd8c-db08-4434-8ac7-ea0584623f5b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z"
	Dec 17 20:02:57 embed-certs-147021 kubelet[724]: I1217 20:02:57.173527     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0513181f-349f-406d-bee0-2833c0e27ccb-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-27rqf\" (UID: \"0513181f-349f-406d-bee0-2833c0e27ccb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf"
	Dec 17 20:02:57 embed-certs-147021 kubelet[724]: I1217 20:02:57.173628     724 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wg6d\" (UniqueName: \"kubernetes.io/projected/0513181f-349f-406d-bee0-2833c0e27ccb-kube-api-access-2wg6d\") pod \"kubernetes-dashboard-855c9754f9-27rqf\" (UID: \"0513181f-349f-406d-bee0-2833c0e27ccb\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf"
	Dec 17 20:03:01 embed-certs-147021 kubelet[724]: I1217 20:03:01.687098     724 scope.go:117] "RemoveContainer" containerID="b29d05f5c1822dd2ad509ea814df77b389ed7bbc8af135fe6980bed69d679cb7"
	Dec 17 20:03:02 embed-certs-147021 kubelet[724]: I1217 20:03:02.691347     724 scope.go:117] "RemoveContainer" containerID="b29d05f5c1822dd2ad509ea814df77b389ed7bbc8af135fe6980bed69d679cb7"
	Dec 17 20:03:02 embed-certs-147021 kubelet[724]: I1217 20:03:02.691700     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:02 embed-certs-147021 kubelet[724]: E1217 20:03:02.692045     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:03 embed-certs-147021 kubelet[724]: I1217 20:03:03.698423     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:03 embed-certs-147021 kubelet[724]: E1217 20:03:03.698631     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:05 embed-certs-147021 kubelet[724]: I1217 20:03:05.721191     724 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-27rqf" podStartSLOduration=0.998599373 podStartE2EDuration="8.721165239s" podCreationTimestamp="2025-12-17 20:02:57 +0000 UTC" firstStartedPulling="2025-12-17 20:02:57.41837367 +0000 UTC m=+6.983545829" lastFinishedPulling="2025-12-17 20:03:05.140939534 +0000 UTC m=+14.706111695" observedRunningTime="2025-12-17 20:03:05.720752329 +0000 UTC m=+15.285924495" watchObservedRunningTime="2025-12-17 20:03:05.721165239 +0000 UTC m=+15.286337406"
	Dec 17 20:03:06 embed-certs-147021 kubelet[724]: I1217 20:03:06.253002     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:06 embed-certs-147021 kubelet[724]: E1217 20:03:06.253278     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: I1217 20:03:19.576176     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: I1217 20:03:19.740934     724 scope.go:117] "RemoveContainer" containerID="bc9c37e2406791371870f72e3b28aec2a49d95707bb0dbeff1532a5040f1f61e"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: I1217 20:03:19.741222     724 scope.go:117] "RemoveContainer" containerID="7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	Dec 17 20:03:19 embed-certs-147021 kubelet[724]: E1217 20:03:19.741484     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:24 embed-certs-147021 kubelet[724]: I1217 20:03:24.756658     724 scope.go:117] "RemoveContainer" containerID="138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350"
	Dec 17 20:03:26 embed-certs-147021 kubelet[724]: I1217 20:03:26.253843     724 scope.go:117] "RemoveContainer" containerID="7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	Dec 17 20:03:26 embed-certs-147021 kubelet[724]: E1217 20:03:26.254025     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:37 embed-certs-147021 kubelet[724]: I1217 20:03:37.576315     724 scope.go:117] "RemoveContainer" containerID="7d20bd215cfe13c8e4ea6af1ef233c20548a09cd11187637eda9e9466894c33b"
	Dec 17 20:03:37 embed-certs-147021 kubelet[724]: E1217 20:03:37.576525     724 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-84b8z_kubernetes-dashboard(5673bd8c-db08-4434-8ac7-ea0584623f5b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-84b8z" podUID="5673bd8c-db08-4434-8ac7-ea0584623f5b"
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: kubelet.service: Deactivated successfully.
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 20:03:39 embed-certs-147021 systemd[1]: kubelet.service: Consumed 1.735s CPU time.
	
	
	==> kubernetes-dashboard [4ebfa66d3b28eddecbfe86a86aaad09d79b307b1c6cdf47b395f4d1eba9148bf] <==
	2025/12/17 20:03:05 Starting overwatch
	2025/12/17 20:03:05 Using namespace: kubernetes-dashboard
	2025/12/17 20:03:05 Using in-cluster config to connect to apiserver
	2025/12/17 20:03:05 Using secret token for csrf signing
	2025/12/17 20:03:05 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/12/17 20:03:05 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/12/17 20:03:05 Successful initial request to the apiserver, version: v1.34.3
	2025/12/17 20:03:05 Generating JWE encryption key
	2025/12/17 20:03:05 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/12/17 20:03:05 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/12/17 20:03:05 Initializing JWE encryption key from synchronized object
	2025/12/17 20:03:05 Creating in-cluster Sidecar client
	2025/12/17 20:03:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/12/17 20:03:05 Serving insecurely on HTTP port: 9090
	2025/12/17 20:03:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [138ac303d832d356d24635c198a00e7be358427c23bd8fdce8ba3aa0818c1350] <==
	I1217 20:02:53.934706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1217 20:03:23.938561       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a97831dd0cfa9d42e8bd7fafa0510d4d2b2a18070aac74aa247e55852e8e114e] <==
	I1217 20:03:24.813213       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1217 20:03:24.823054       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1217 20:03:24.823137       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1217 20:03:24.825815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:28.281515       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:32.542579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:36.142052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:39.197700       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:42.221298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:42.227792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:03:42.228045       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1217 20:03:42.228301       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-147021_a7504213-e276-4f45-9ca2-4efa8775deb0!
	I1217 20:03:42.228552       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"910d36f2-445e-4325-a1df-6c5c1d1eea0a", APIVersion:"v1", ResourceVersion:"679", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-147021_a7504213-e276-4f45-9ca2-4efa8775deb0 became leader
	W1217 20:03:42.231297       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:42.240763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1217 20:03:42.329306       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-147021_a7504213-e276-4f45-9ca2-4efa8775deb0!
	W1217 20:03:44.245467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:44.250037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:46.254216       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1217 20:03:46.258409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-147021 -n embed-certs-147021
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-147021 -n embed-certs-147021: exit status 2 (369.711127ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-147021 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:294: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.75s)
E1217 20:05:13.886108  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:05:18.050190  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"


Test pass (354/415)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.61
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.15
12 TestDownloadOnly/v1.34.3/json-events 2.99
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.23
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.16
21 TestDownloadOnly/v1.35.0-rc.1/json-events 2.82
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.08
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.24
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.15
29 TestDownloadOnlyKic 0.44
30 TestBinaryMirror 0.84
31 TestOffline 57.01
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 93.35
40 TestAddons/serial/GCPAuth/Namespaces 0.12
41 TestAddons/serial/GCPAuth/FakeCredentials 8.44
57 TestAddons/StoppedEnableDisable 16.76
58 TestCertOptions 23.39
59 TestCertExpiration 218.63
61 TestForceSystemdFlag 27.23
62 TestForceSystemdEnv 37.34
67 TestErrorSpam/setup 22.27
68 TestErrorSpam/start 0.68
69 TestErrorSpam/status 0.97
70 TestErrorSpam/pause 6.55
71 TestErrorSpam/unpause 6.14
72 TestErrorSpam/stop 12.67
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 41.75
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 6.26
79 TestFunctional/serial/KubeContext 0.05
80 TestFunctional/serial/KubectlGetPods 0.06
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
84 TestFunctional/serial/CacheCmd/cache/add_local 0.92
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
86 TestFunctional/serial/CacheCmd/cache/list 0.07
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.56
89 TestFunctional/serial/CacheCmd/cache/delete 0.13
90 TestFunctional/serial/MinikubeKubectlCmd 0.13
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
92 TestFunctional/serial/ExtraConfig 68.13
93 TestFunctional/serial/ComponentHealth 0.07
94 TestFunctional/serial/LogsCmd 1.29
95 TestFunctional/serial/LogsFileCmd 1.32
96 TestFunctional/serial/InvalidService 4.02
98 TestFunctional/parallel/ConfigCmd 0.51
99 TestFunctional/parallel/DashboardCmd 9.66
100 TestFunctional/parallel/DryRun 0.42
101 TestFunctional/parallel/InternationalLanguage 0.22
102 TestFunctional/parallel/StatusCmd 1.01
106 TestFunctional/parallel/ServiceCmdConnect 6.76
107 TestFunctional/parallel/AddonsCmd 0.18
108 TestFunctional/parallel/PersistentVolumeClaim 18.31
110 TestFunctional/parallel/SSHCmd 0.79
111 TestFunctional/parallel/CpCmd 2.1
112 TestFunctional/parallel/MySQL 21.97
113 TestFunctional/parallel/FileSync 0.33
114 TestFunctional/parallel/CertSync 2.11
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.69
122 TestFunctional/parallel/License 0.24
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
126 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
127 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
128 TestFunctional/parallel/ImageCommands/ImageListJson 1.74
129 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
130 TestFunctional/parallel/ImageCommands/ImageBuild 3.06
131 TestFunctional/parallel/ImageCommands/Setup 0.45
133 TestFunctional/parallel/ServiceCmd/DeployApp 14.22
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.26
139 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 4.1
140 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.67
141 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
142 TestFunctional/parallel/ImageCommands/ImageRemove 0.54
143 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.62
144 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
145 TestFunctional/parallel/ServiceCmd/List 0.52
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.94
153 TestFunctional/parallel/ServiceCmd/HTTPS 0.58
154 TestFunctional/parallel/ServiceCmd/Format 0.57
155 TestFunctional/parallel/ServiceCmd/URL 0.56
156 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
157 TestFunctional/parallel/ProfileCmd/profile_list 0.4
158 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
159 TestFunctional/parallel/MountCmd/any-port 7.08
160 TestFunctional/parallel/Version/short 0.07
161 TestFunctional/parallel/Version/components 0.54
162 TestFunctional/parallel/MountCmd/specific-port 1.91
163 TestFunctional/parallel/MountCmd/VerifyCleanup 1.89
164 TestFunctional/delete_echo-server_images 0.04
165 TestFunctional/delete_my-image_image 0.02
166 TestFunctional/delete_minikube_cached_images 0.02
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 40.5
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 6.29
174 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.05
175 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.06
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.63
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 0.89
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.07
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.07
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.31
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.6
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.14
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.13
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.13
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 46.28
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.07
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.27
190 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.29
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.12
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.5
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 15
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.43
196 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.19
197 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 1.08
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 9.75
202 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.22
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 25.71
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.61
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.95
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 22.15
208 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.31
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.88
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.64
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.3
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 8.2
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.52
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 8.12
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.47
222 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.48
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.17
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.16
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.16
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 1.04
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 2.21
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 0.99
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.46
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.54
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.88
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.45
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup 14.26
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.07
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.54
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.28
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.27
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.27
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.28
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 2.81
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 0.17
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.08
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 0.84
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 1.07
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.11
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.39
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.53
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.65
258 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.41
259 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
260 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
261 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
265 TestMultiControlPlane/serial/StartCluster 112.99
266 TestMultiControlPlane/serial/DeployApp 5
267 TestMultiControlPlane/serial/PingHostFromPods 1.14
268 TestMultiControlPlane/serial/AddWorkerNode 24.61
269 TestMultiControlPlane/serial/NodeLabels 0.07
270 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.95
271 TestMultiControlPlane/serial/CopyFile 18.3
272 TestMultiControlPlane/serial/StopSecondaryNode 18.89
273 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
274 TestMultiControlPlane/serial/RestartSecondaryNode 14.75
275 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.93
276 TestMultiControlPlane/serial/RestartClusterKeepsNodes 117.78
277 TestMultiControlPlane/serial/DeleteSecondaryNode 10.64
278 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.72
279 TestMultiControlPlane/serial/StopCluster 43.68
280 TestMultiControlPlane/serial/RestartCluster 56.99
281 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
282 TestMultiControlPlane/serial/AddSecondaryNode 39.27
283 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.94
288 TestJSONOutput/start/Command 41.48
289 TestJSONOutput/start/Audit 0
291 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
292 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
295 TestJSONOutput/pause/Audit 0
297 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
298 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
301 TestJSONOutput/unpause/Audit 0
303 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
304 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
306 TestJSONOutput/stop/Command 8.02
307 TestJSONOutput/stop/Audit 0
309 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
310 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
311 TestErrorJSONOutput 0.24
313 TestKicCustomNetwork/create_custom_network 30.91
314 TestKicCustomNetwork/use_default_bridge_network 22.79
315 TestKicExistingNetwork 23.61
316 TestKicCustomSubnet 23.74
317 TestKicStaticIP 25.9
318 TestMainNoArgs 0.06
319 TestMinikubeProfile 48.06
322 TestMountStart/serial/StartWithMountFirst 7.89
323 TestMountStart/serial/VerifyMountFirst 0.29
324 TestMountStart/serial/StartWithMountSecond 4.89
325 TestMountStart/serial/VerifyMountSecond 0.29
326 TestMountStart/serial/DeleteFirst 1.69
327 TestMountStart/serial/VerifyMountPostDelete 0.28
328 TestMountStart/serial/Stop 1.26
329 TestMountStart/serial/RestartStopped 7.56
330 TestMountStart/serial/VerifyMountPostStop 0.29
333 TestMultiNode/serial/FreshStart2Nodes 66.23
334 TestMultiNode/serial/DeployApp2Nodes 4.42
335 TestMultiNode/serial/PingHostFrom2Pods 0.77
336 TestMultiNode/serial/AddNode 24.95
337 TestMultiNode/serial/MultiNodeLabels 0.07
338 TestMultiNode/serial/ProfileList 0.69
339 TestMultiNode/serial/CopyFile 10.39
340 TestMultiNode/serial/StopNode 2.32
341 TestMultiNode/serial/StartAfterStop 7.29
342 TestMultiNode/serial/RestartKeepsNodes 78.72
343 TestMultiNode/serial/DeleteNode 5.32
344 TestMultiNode/serial/StopMultiNode 30.41
345 TestMultiNode/serial/RestartMultiNode 51.74
346 TestMultiNode/serial/ValidateNameConflict 25.82
351 TestPreload 101.83
353 TestScheduledStopUnix 98.64
356 TestInsufficientStorage 9.08
357 TestRunningBinaryUpgrade 51.91
359 TestKubernetesUpgrade 297.79
360 TestMissingContainerUpgrade 66.04
362 TestStoppedBinaryUpgrade/Setup 0.48
363 TestPause/serial/Start 60.81
364 TestStoppedBinaryUpgrade/Upgrade 62.91
372 TestPause/serial/SecondStartNoReconfiguration 7.93
373 TestStoppedBinaryUpgrade/MinikubeLogs 1.29
376 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
377 TestNoKubernetes/serial/StartWithK8s 24.32
378 TestNoKubernetes/serial/StartWithStopK8s 11.98
379 TestNoKubernetes/serial/Start 7.15
380 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
381 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
382 TestNoKubernetes/serial/ProfileList 31.62
390 TestNetworkPlugins/group/false 3.67
394 TestNoKubernetes/serial/Stop 1.3
395 TestNoKubernetes/serial/StartNoArgs 6.74
396 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
398 TestStartStop/group/old-k8s-version/serial/FirstStart 52.61
400 TestStartStop/group/no-preload/serial/FirstStart 46.16
401 TestStartStop/group/no-preload/serial/DeployApp 8.22
402 TestStartStop/group/old-k8s-version/serial/DeployApp 8.23
404 TestStartStop/group/no-preload/serial/Stop 16.29
406 TestStartStop/group/old-k8s-version/serial/Stop 15.98
407 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
408 TestStartStop/group/no-preload/serial/SecondStart 49.07
409 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
410 TestStartStop/group/old-k8s-version/serial/SecondStart 47.31
412 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.13
413 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
414 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
415 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
416 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
417 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
419 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
421 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.31
423 TestStartStop/group/newest-cni/serial/FirstStart 24.1
425 TestStartStop/group/embed-certs/serial/FirstStart 42.9
427 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.4
428 TestStartStop/group/newest-cni/serial/DeployApp 0
430 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
431 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 47.18
432 TestStartStop/group/newest-cni/serial/Stop 18.64
433 TestStartStop/group/embed-certs/serial/DeployApp 7.24
434 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.23
435 TestStartStop/group/newest-cni/serial/SecondStart 11.33
437 TestStartStop/group/embed-certs/serial/Stop 17.35
438 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
439 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
440 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
442 TestNetworkPlugins/group/auto/Start 41.6
443 TestNetworkPlugins/group/kindnet/Start 48.56
444 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
445 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.25
446 TestStartStop/group/embed-certs/serial/SecondStart 44.28
447 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
448 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
450 TestNetworkPlugins/group/calico/Start 50.74
451 TestNetworkPlugins/group/auto/KubeletFlags 0.32
452 TestNetworkPlugins/group/auto/NetCatPod 9.2
453 TestNetworkPlugins/group/auto/DNS 0.16
454 TestNetworkPlugins/group/auto/Localhost 0.12
455 TestNetworkPlugins/group/auto/HairPin 0.11
456 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
457 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
458 TestNetworkPlugins/group/kindnet/KubeletFlags 0.38
459 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
460 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
461 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.34
463 TestNetworkPlugins/group/kindnet/DNS 0.17
464 TestNetworkPlugins/group/kindnet/Localhost 0.13
465 TestNetworkPlugins/group/kindnet/HairPin 0.13
466 TestNetworkPlugins/group/custom-flannel/Start 53.83
467 TestNetworkPlugins/group/enable-default-cni/Start 69.39
468 TestNetworkPlugins/group/calico/ControllerPod 6.01
469 TestNetworkPlugins/group/calico/KubeletFlags 0.39
470 TestNetworkPlugins/group/calico/NetCatPod 9.24
471 TestNetworkPlugins/group/flannel/Start 45.9
472 TestNetworkPlugins/group/calico/DNS 0.14
473 TestNetworkPlugins/group/calico/Localhost 0.12
474 TestNetworkPlugins/group/calico/HairPin 0.11
475 TestNetworkPlugins/group/bridge/Start 63.73
476 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
477 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.2
478 TestNetworkPlugins/group/custom-flannel/DNS 0.16
479 TestNetworkPlugins/group/custom-flannel/Localhost 0.12
480 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
481 TestNetworkPlugins/group/flannel/ControllerPod 6.01
482 TestNetworkPlugins/group/flannel/KubeletFlags 0.33
483 TestNetworkPlugins/group/flannel/NetCatPod 9.22
484 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
485 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.27
486 TestNetworkPlugins/group/flannel/DNS 0.13
487 TestNetworkPlugins/group/flannel/Localhost 0.11
488 TestNetworkPlugins/group/flannel/HairPin 0.1
489 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
490 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
491 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
492 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
493 TestNetworkPlugins/group/bridge/NetCatPod 8.19
494 TestNetworkPlugins/group/bridge/DNS 0.11
495 TestNetworkPlugins/group/bridge/Localhost 0.09
496 TestNetworkPlugins/group/bridge/HairPin 0.09
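Aside (not part of the generated report): the pass list above is plain "index  test-name  seconds" text, so it can be post-processed directly. The following is a minimal Go sketch, assuming that three-field layout holds and reading the list from stdin, that prints the ten slowest passing tests; the file name slowest.go and the output format are illustrative only, not part of the minikube tooling.

package main

import (
	"bufio"
	"fmt"
	"os"
	"sort"
	"strconv"
	"strings"
)

type result struct {
	name    string
	seconds float64
}

func main() {
	// Read report lines shaped like "130 TestFunctional/parallel/ImageCommands/ImageBuild 3.06".
	var results []result
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) != 3 {
			continue // not an "index name seconds" row
		}
		secs, err := strconv.ParseFloat(fields[2], 64)
		if err != nil {
			continue // third field is not a duration
		}
		results = append(results, result{name: fields[1], seconds: secs})
	}
	// Sort by duration, longest first, and print the top ten.
	sort.Slice(results, func(i, j int) bool { return results[i].seconds > results[j].seconds })
	for i, r := range results {
		if i == 10 {
			break
		}
		fmt.Printf("%8.2fs  %s\n", r.seconds, r.name)
	}
}

Usage would be along the lines of "go run slowest.go < passed-tests.txt", where passed-tests.txt is a hypothetical file holding the lines above.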
TestDownloadOnly/v1.28.0/json-events (4.61s)
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-096016 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-096016 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.613518895s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.61s)

TestDownloadOnly/v1.28.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 19:24:14.967385  375797 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1217 19:24:14.967484  375797 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-096016
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-096016: exit status 85 (77.718949ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-096016 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-096016 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:24:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:24:10.409252  375810 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:24:10.409486  375810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:10.409494  375810 out.go:374] Setting ErrFile to fd 2...
	I1217 19:24:10.409498  375810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:10.409897  375810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	W1217 19:24:10.410047  375810 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22186-372245/.minikube/config/config.json: open /home/jenkins/minikube-integration/22186-372245/.minikube/config/config.json: no such file or directory
	I1217 19:24:10.410559  375810 out.go:368] Setting JSON to true
	I1217 19:24:10.411611  375810 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4001,"bootTime":1765995449,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:24:10.411674  375810 start.go:143] virtualization: kvm guest
	I1217 19:24:10.415851  375810 out.go:99] [download-only-096016] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:24:10.416051  375810 notify.go:221] Checking for updates...
	W1217 19:24:10.416055  375810 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball: no such file or directory
	I1217 19:24:10.417254  375810 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:24:10.418645  375810 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:24:10.420120  375810 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:24:10.421448  375810 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:24:10.422610  375810 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:24:10.425107  375810 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:24:10.425457  375810 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:24:10.449408  375810 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:24:10.449498  375810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:10.507684  375810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-17 19:24:10.497502695 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:10.507798  375810 docker.go:319] overlay module found
	I1217 19:24:10.509312  375810 out.go:99] Using the docker driver based on user configuration
	I1217 19:24:10.509340  375810 start.go:309] selected driver: docker
	I1217 19:24:10.509352  375810 start.go:927] validating driver "docker" against <nil>
	I1217 19:24:10.509462  375810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:10.567292  375810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:63 SystemTime:2025-12-17 19:24:10.557450869 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:10.567487  375810 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:24:10.568021  375810 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 19:24:10.568213  375810 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:24:10.570024  375810 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-096016 host does not exist
	  To start a cluster, run: "minikube start -p download-only-096016"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-096016
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnly/v1.34.3/json-events (2.99s)
=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-266209 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-266209 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.988511158s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (2.99s)

TestDownloadOnly/v1.34.3/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1217 19:24:18.423237  375797 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime crio
I1217 19:24:18.423286  375797 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.07s)
=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-266209
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-266209: exit status 85 (74.36744ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-096016 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-096016 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-096016                                                                                                                                                   │ download-only-096016 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ -o=json --download-only -p download-only-266209 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-266209 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:24:15
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:24:15.491047  376166 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:24:15.491226  376166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:15.491245  376166 out.go:374] Setting ErrFile to fd 2...
	I1217 19:24:15.491250  376166 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:15.491442  376166 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:24:15.491927  376166 out.go:368] Setting JSON to true
	I1217 19:24:15.492871  376166 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4006,"bootTime":1765995449,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:24:15.492940  376166 start.go:143] virtualization: kvm guest
	I1217 19:24:15.494784  376166 out.go:99] [download-only-266209] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:24:15.495012  376166 notify.go:221] Checking for updates...
	I1217 19:24:15.496621  376166 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:24:15.498044  376166 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:24:15.499293  376166 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:24:15.500483  376166 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:24:15.501782  376166 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:24:15.504405  376166 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:24:15.504717  376166 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:24:15.531542  376166 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:24:15.531619  376166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:15.588751  376166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 19:24:15.579250725 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:15.588856  376166 docker.go:319] overlay module found
	I1217 19:24:15.590454  376166 out.go:99] Using the docker driver based on user configuration
	I1217 19:24:15.590487  376166 start.go:309] selected driver: docker
	I1217 19:24:15.590493  376166 start.go:927] validating driver "docker" against <nil>
	I1217 19:24:15.590580  376166 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:15.644808  376166 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:51 SystemTime:2025-12-17 19:24:15.63476145 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:15.645009  376166 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:24:15.645568  376166 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 19:24:15.645718  376166 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:24:15.647461  376166 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-266209 host does not exist
	  To start a cluster, run: "minikube start -p download-only-266209"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

TestDownloadOnly/v1.34.3/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.23s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.16s)
=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-266209
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.35.0-rc.1/json-events (2.82s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-371882 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-371882 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (2.819938329s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (2.82s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1217 19:24:21.707796  375797 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime crio
I1217 19:24:21.707856  375797 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-371882
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-371882: exit status 85 (77.166645ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                      ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-096016 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-096016 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-096016                                                                                                                                                        │ download-only-096016 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ -o=json --download-only -p download-only-266209 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=crio --driver=docker  --container-runtime=crio      │ download-only-266209 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                          │ minikube             │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ delete  │ -p download-only-266209                                                                                                                                                        │ download-only-266209 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │ 17 Dec 25 19:24 UTC │
	│ start   │ -o=json --download-only -p download-only-371882 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-371882 │ jenkins │ v1.37.0 │ 17 Dec 25 19:24 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 19:24:18
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 19:24:18.942885  376524 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:24:18.943199  376524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:18.943209  376524 out.go:374] Setting ErrFile to fd 2...
	I1217 19:24:18.943214  376524 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:24:18.943430  376524 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:24:18.943895  376524 out.go:368] Setting JSON to true
	I1217 19:24:18.944878  376524 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4010,"bootTime":1765995449,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:24:18.944939  376524 start.go:143] virtualization: kvm guest
	I1217 19:24:18.946799  376524 out.go:99] [download-only-371882] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:24:18.946988  376524 notify.go:221] Checking for updates...
	I1217 19:24:18.948390  376524 out.go:171] MINIKUBE_LOCATION=22186
	I1217 19:24:18.949833  376524 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:24:18.950990  376524 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:24:18.952188  376524 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:24:18.953297  376524 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1217 19:24:18.955693  376524 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 19:24:18.955963  376524 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:24:18.979870  376524 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:24:18.979993  376524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:19.033396  376524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-17 19:24:19.023456087 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:19.033509  376524 docker.go:319] overlay module found
	I1217 19:24:19.035175  376524 out.go:99] Using the docker driver based on user configuration
	I1217 19:24:19.035215  376524 start.go:309] selected driver: docker
	I1217 19:24:19.035230  376524 start.go:927] validating driver "docker" against <nil>
	I1217 19:24:19.035331  376524 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:24:19.089680  376524 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-12-17 19:24:19.079071347 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:24:19.089919  376524 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 19:24:19.090444  376524 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1217 19:24:19.090620  376524 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 19:24:19.092276  376524 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-371882 host does not exist
	  To start a cluster, run: "minikube start -p download-only-371882"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.08s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.24s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.24s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-371882
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.44s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-902104 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "download-docker-902104" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-902104
--- PASS: TestDownloadOnlyKic (0.44s)

                                                
                                    
TestBinaryMirror (0.84s)

                                                
                                                
=== RUN   TestBinaryMirror
I1217 19:24:23.077821  375797 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-277393 --alsologtostderr --binary-mirror http://127.0.0.1:41979 --driver=docker  --container-runtime=crio
helpers_test.go:176: Cleaning up "binary-mirror-277393" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-277393
--- PASS: TestBinaryMirror (0.84s)

                                                
                                    
TestOffline (57.01s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-299824 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-299824 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (54.33085102s)
helpers_test.go:176: Cleaning up "offline-crio-299824" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-299824
E1217 19:57:03.606420  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-299824: (2.679319572s)
--- PASS: TestOffline (57.01s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-695107
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-695107: exit status 85 (65.785213ms)

                                                
                                                
-- stdout --
	* Profile "addons-695107" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-695107"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-695107
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-695107: exit status 85 (66.961539ms)

                                                
                                                
-- stdout --
	* Profile "addons-695107" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-695107"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/Setup (93.35s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-695107 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-695107 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m33.350377577s)
--- PASS: TestAddons/Setup (93.35s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-695107 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-695107 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-695107 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-695107 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [0895821d-164d-43f0-b04c-41cd5a505dbf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [0895821d-164d-43f0-b04c-41cd5a505dbf] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003999294s
addons_test.go:696: (dbg) Run:  kubectl --context addons-695107 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-695107 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-695107 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.44s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.76s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-695107
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-695107: (16.457678536s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-695107
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-695107
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-695107
--- PASS: TestAddons/StoppedEnableDisable (16.76s)

                                                
                                    
TestCertOptions (23.39s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-997440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-997440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.175519515s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-997440 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-997440 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-997440 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-997440" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-997440
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-997440: (2.513123032s)
--- PASS: TestCertOptions (23.39s)

                                                
                                    
TestCertExpiration (218.63s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-059470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-059470 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.588627835s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-059470 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.859578169s)
helpers_test.go:176: Cleaning up "cert-expiration-059470" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-059470
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-059470: (3.175856591s)
--- PASS: TestCertExpiration (218.63s)

                                                
                                    
TestForceSystemdFlag (27.23s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-134068 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-134068 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.293535368s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-134068 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:176: Cleaning up "force-systemd-flag-134068" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-134068
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-134068: (4.533316849s)
--- PASS: TestForceSystemdFlag (27.23s)

                                                
                                    
TestForceSystemdEnv (37.34s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-335995 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-335995 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.794423957s)
helpers_test.go:176: Cleaning up "force-systemd-env-335995" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-335995
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-335995: (2.541735494s)
--- PASS: TestForceSystemdEnv (37.34s)

                                                
                                    
TestErrorSpam/setup (22.27s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-825054 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-825054 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-825054 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-825054 --driver=docker  --container-runtime=crio: (22.273769202s)
--- PASS: TestErrorSpam/setup (22.27s)

                                                
                                    
TestErrorSpam/start (0.68s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

                                                
                                    
TestErrorSpam/status (0.97s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 status
--- PASS: TestErrorSpam/status (0.97s)

                                                
                                    
TestErrorSpam/pause (6.55s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause: exit status 80 (2.216253252s)

                                                
                                                
-- stdout --
	* Pausing node nospam-825054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:29:26Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause: exit status 80 (2.007153772s)

                                                
                                                
-- stdout --
	* Pausing node nospam-825054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:29:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause: exit status 80 (2.326626819s)

                                                
                                                
-- stdout --
	* Pausing node nospam-825054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:29:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.55s)

                                                
                                    
TestErrorSpam/unpause (6.14s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause: exit status 80 (2.350071079s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-825054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:29:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause: exit status 80 (1.849374231s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-825054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:29:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause: exit status 80 (1.937911295s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-825054 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-17T19:29:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (6.14s)

                                                
                                    
TestErrorSpam/stop (12.67s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 stop: (12.447954668s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-825054 --log_dir /tmp/nospam-825054 stop
--- PASS: TestErrorSpam/stop (12.67s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/test/nested/copy/375797/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (41.75s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676725 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-676725 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.744752185s)
--- PASS: TestFunctional/serial/StartWithProxy (41.75s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.26s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1217 19:30:35.637298  375797 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676725 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-676725 --alsologtostderr -v=8: (6.263571642s)
functional_test.go:678: soft start took 6.264367297s for "functional-676725" cluster.
I1217 19:30:41.901294  375797 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (6.26s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-676725 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-676725 /tmp/TestFunctionalserialCacheCmdcacheadd_local3295472026/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cache add minikube-local-cache-test:functional-676725
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cache delete minikube-local-cache-test:functional-676725
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-676725
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.92s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (291.721733ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.56s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 kubectl -- --context functional-676725 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-676725 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (68.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676725 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 19:30:57.996669  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.003204  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.014732  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.036199  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.077683  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.159178  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.320750  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:58.642281  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:30:59.284358  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:31:00.566051  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:31:03.129052  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:31:08.250690  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:31:18.492201  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:31:38.973721  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-676725 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m8.129216862s)
functional_test.go:776: restart took 1m8.12934153s for "functional-676725" cluster.
I1217 19:31:56.007682  375797 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (68.13s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-676725 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.29s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 logs: (1.287600653s)
--- PASS: TestFunctional/serial/LogsCmd (1.29s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 logs --file /tmp/TestFunctionalserialLogsFileCmd1898474855/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 logs --file /tmp/TestFunctionalserialLogsFileCmd1898474855/001/logs.txt: (1.318544787s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.32s)

                                                
                                    
TestFunctional/serial/InvalidService (4.02s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-676725 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-676725
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-676725: exit status 115 (359.808297ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31380 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-676725 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.02s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 config get cpus: exit status 14 (105.066325ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 config get cpus: exit status 14 (95.089656ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (9.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-676725 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-676725 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 414551: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.66s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676725 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-676725 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (194.383317ms)

                                                
                                                
-- stdout --
	* [functional-676725] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:32:26.115125  413658 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:32:26.115414  413658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:26.115424  413658 out.go:374] Setting ErrFile to fd 2...
	I1217 19:32:26.115428  413658 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:26.115774  413658 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:32:26.116331  413658 out.go:368] Setting JSON to false
	I1217 19:32:26.117532  413658 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4497,"bootTime":1765995449,"procs":253,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:32:26.117593  413658 start.go:143] virtualization: kvm guest
	I1217 19:32:26.119471  413658 out.go:179] * [functional-676725] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:32:26.120730  413658 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:32:26.120766  413658 notify.go:221] Checking for updates...
	I1217 19:32:26.123586  413658 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:32:26.124852  413658 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:32:26.125940  413658 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:32:26.127114  413658 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:32:26.128119  413658 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:32:26.129889  413658 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:32:26.130516  413658 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:32:26.156000  413658 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:32:26.156154  413658 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:32:26.220005  413658 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 19:32:26.20866636 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:32:26.220155  413658 docker.go:319] overlay module found
	I1217 19:32:26.224203  413658 out.go:179] * Using the docker driver based on existing profile
	I1217 19:32:26.225430  413658 start.go:309] selected driver: docker
	I1217 19:32:26.225451  413658 start.go:927] validating driver "docker" against &{Name:functional-676725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-676725 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:32:26.225590  413658 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:32:26.229904  413658 out.go:203] 
	W1217 19:32:26.231148  413658 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 19:32:26.232319  413658 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676725 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)
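
The two invocations above are the crux of this test: a dry-run start with --memory 250MB must be rejected with RSRC_INSUFFICIENT_REQ_MEMORY (the usable minimum is 1800MB), while the plain dry-run must succeed. A minimal sketch of reproducing the failing half outside the test harness, using only os/exec and the exact command line from the log (this is not the test's own helper code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the test uses: a dry-run start with far too little memory.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-676725",
		"--dry-run", "--memory", "250MB", "--alsologtostderr", "--driver=docker",
		"--container-runtime=crio")
	out, err := cmd.CombinedOutput()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Expected path: minikube refuses the 250MB request (< 1800MB minimum).
		fmt.Printf("dry-run rejected as expected, exit code %d\n%s\n", exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("unexpected success: dry-run accepted the 250MB request")
}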

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676725 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-676725 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.801986ms)

                                                
                                                
-- stdout --
	* [functional-676725] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:32:25.899523  413423 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:32:25.899619  413423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:25.899629  413423 out.go:374] Setting ErrFile to fd 2...
	I1217 19:32:25.899635  413423 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:32:25.900063  413423 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:32:25.900671  413423 out.go:368] Setting JSON to false
	I1217 19:32:25.902023  413423 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4497,"bootTime":1765995449,"procs":248,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:32:25.902126  413423 start.go:143] virtualization: kvm guest
	I1217 19:32:25.904402  413423 out.go:179] * [functional-676725] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 19:32:25.905727  413423 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:32:25.905745  413423 notify.go:221] Checking for updates...
	I1217 19:32:25.908552  413423 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:32:25.909936  413423 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:32:25.911114  413423 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:32:25.913107  413423 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:32:25.914232  413423 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:32:25.918828  413423 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:32:25.919650  413423 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:32:25.952001  413423 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:32:25.952162  413423 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:32:26.024573  413423 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:56 SystemTime:2025-12-17 19:32:26.01263285 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:32:26.024695  413423 docker.go:319] overlay module found
	I1217 19:32:26.026868  413423 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 19:32:26.028115  413423 start.go:309] selected driver: docker
	I1217 19:32:26.028135  413423 start.go:927] validating driver "docker" against &{Name:functional-676725 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-676725 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:32:26.028261  413423 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:32:26.031909  413423 out.go:203] 
	W1217 19:32:26.033370  413423 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 19:32:26.037199  413423 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
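
This test re-runs the same under-provisioned dry-run but asserts that the RSRC_INSUFFICIENT_REQ_MEMORY message comes back localized, hence the French stdout/stderr above. A sketch of forcing that locally, under the assumption that minikube picks its translations from the standard locale environment variables (LC_ALL here):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumption: setting LC_ALL=fr is enough to get the French messages seen above.
	cmd := exec.Command("out/minikube-linux-amd64", "start", "-p", "functional-676725",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=crio")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")

	out, _ := cmd.CombinedOutput() // a non-zero exit is the expected outcome here
	fmt.Printf("%s", out)          // e.g. "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."
}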

                                                
                                    
TestFunctional/parallel/StatusCmd (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
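
The status command is exercised three ways above: default output, a Go-template format string, and -o json. A sketch of consuming the JSON form, assuming the JSON keys mirror the template fields used in the log ({{.Host}}, {{.Kubelet}}, {{.APIServer}}, {{.Kubeconfig}}); a single-node cluster decodes as one object, multi-node clusters return a list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Assumed shape, derived from the template fields the test formats.
type clusterStatus struct {
	Host       string `json:"Host"`
	Kubelet    string `json:"Kubelet"`
	APIServer  string `json:"APIServer"`
	Kubeconfig string `json:"Kubeconfig"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-676725",
		"status", "-o", "json").Output()
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unexpected status payload:", err)
		return
	}
	fmt.Printf("host=%s kubelet=%s apiserver=%s kubeconfig=%s\n",
		st.Host, st.Kubelet, st.APIServer, st.Kubeconfig)
}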

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-676725 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-676725 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-p45tb" [5ce1c1e7-a599-4446-b3a2-161bcf30752a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
E1217 19:32:19.935905  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "hello-node-connect-7d85dfc575-p45tb" [5ce1c1e7-a599-4446-b3a2-161bcf30752a] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.003736297s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30319
functional_test.go:1680: http://192.168.49.2:30319: success! body:
Request served by hello-node-connect-7d85dfc575-p45tb

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30319
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.76s)
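
The flow being verified here is: create a deployment from kicbase/echo-server, expose it as a NodePort service, resolve its URL with minikube service --url, then confirm an HTTP GET is answered by the pod. A sketch of the same flow driven from Go via os/exec (commands and names are the ones in the log; it omits the wait-for-Ready step the harness performs between expose and the GET):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func run(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	if _, err := run("kubectl", "--context", "functional-676725", "create", "deployment",
		"hello-node-connect", "--image", "kicbase/echo-server"); err != nil {
		fmt.Println("create deployment:", err)
		return
	}
	if _, err := run("kubectl", "--context", "functional-676725", "expose", "deployment",
		"hello-node-connect", "--type=NodePort", "--port=8080"); err != nil {
		fmt.Println("expose:", err)
		return
	}
	url, err := run("out/minikube-linux-amd64", "-p", "functional-676725",
		"service", "hello-node-connect", "--url")
	if err != nil {
		fmt.Println("service --url:", err)
		return
	}
	// In this run the URL resolved to http://192.168.49.2:30319; the pod must be
	// Ready before the request below succeeds.
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("GET:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s\n%s\n", resp.Status, body)
}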

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (18.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [eab44a3a-77f9-4f73-be3e-1e7ccf3213b2] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004280832s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-676725 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-676725 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-676725 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-676725 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:32:20.841040  375797 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [423268fb-2e4a-4e59-82a9-429397fcb267] Pending
helpers_test.go:353: "sp-pod" [423268fb-2e4a-4e59-82a9-429397fcb267] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004525262s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-676725 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-676725 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-676725 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [7049a27e-a0af-4e63-91ef-4e13ffa98f3b] Pending
helpers_test.go:353: "sp-pod" [7049a27e-a0af-4e63-91ef-4e13ffa98f3b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004288516s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-676725 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (18.31s)
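
The persistence check above is: claim a volume, write /tmp/mount/foo through a pod, delete and recreate the pod, and confirm the file is still visible. A sketch of the same sequence with kubectl; the manifest paths are the ones referenced in the log and are assumed to be available locally, and the wait-for-Running polling between steps is only noted in a comment:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-676725"}, args...)...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl %v: %v\n%s", args, err, out)
	}
	return nil
}

func main() {
	// Write a file through the PVC-backed pod, recreate the pod, read the file back.
	steps := [][]string{
		{"apply", "-f", "testdata/storage-provisioner/pvc.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},
	}
	for _, s := range steps {
		// The harness waits for sp-pod to be Running after each apply; a real
		// script would poll `kubectl get pod sp-pod` the same way.
		if err := kubectl(s...); err != nil {
			fmt.Println(err)
			return
		}
	}
	fmt.Println("file survived pod recreation")
}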

                                                
                                    
TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.79s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh -n functional-676725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cp functional-676725:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2399664557/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh -n functional-676725 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh -n functional-676725 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.10s)
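
minikube cp copies a local file into the node, and the ssh -n calls read it back to verify the contents arrived intact. A sketch of that round trip, reusing the file and destination path from the log:

package main

import (
	"fmt"
	"os/exec"
)

func minikube(args ...string) (string, error) {
	out, err := exec.Command("out/minikube-linux-amd64",
		append([]string{"-p", "functional-676725"}, args...)...).CombinedOutput()
	return string(out), err
}

func main() {
	// Copy the file into the node, then read it back over ssh.
	if _, err := minikube("cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
		fmt.Println("cp failed:", err)
		return
	}
	out, err := minikube("ssh", "-n", "functional-676725", "sudo cat /home/docker/cp-test.txt")
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	// The test compares the full contents; here we just show the round trip worked.
	fmt.Printf("read %d bytes back out of the node\n", len(out))
}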

                                                
                                    
TestFunctional/parallel/MySQL (21.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-676725 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-8c25x" [7d914761-bbc7-4365-920b-89f481da6d94] Pending
helpers_test.go:353: "mysql-6bcdcbc558-8c25x" [7d914761-bbc7-4365-920b-89f481da6d94] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-8c25x" [7d914761-bbc7-4365-920b-89f481da6d94] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.003413424s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;": exit status 1 (96.70888ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:18.705874  375797 retry.go:31] will retry after 1.392721119s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;": exit status 1 (103.280252ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:20.203010  375797 retry.go:31] will retry after 1.595107845s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;": exit status 1 (95.294478ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:32:21.894019  375797 retry.go:31] will retry after 3.360093043s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-676725 exec mysql-6bcdcbc558-8c25x -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (21.97s)
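
The ERROR 1045 / ERROR 2002 failures above are expected noise: mysqld is still initializing inside the pod, so the harness retries the query with a growing delay (retry.go) until it succeeds. A sketch of an equivalent retry loop around the same kubectl exec:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	args := []string{"--context", "functional-676725", "exec", "mysql-6bcdcbc558-8c25x",
		"--", "mysql", "-ppassword", "-e", "show databases;"}
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("%s", out)
			return
		}
		fmt.Printf("attempt %d failed (%v), retrying in %s\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff; the harness uses jittered delays
	}
	fmt.Println("mysql never became ready")
}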

                                                
                                    
TestFunctional/parallel/FileSync (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/375797/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /etc/test/nested/copy/375797/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
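
File sync places anything under the minikube files directory into the node at the corresponding absolute path, which is how /etc/test/nested/copy/375797/hosts becomes readable above. A sketch of seeding such a file; the directory layout is an assumption based on minikube's documented file-sync behaviour, and in this run MINIKUBE_HOME already points at the .minikube directory, so the files root is taken as $MINIKUBE_HOME/files:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	home := os.Getenv("MINIKUBE_HOME")
	src := filepath.Join(home, "files", "etc", "test", "nested", "copy", "375797", "hosts")
	if err := os.MkdirAll(filepath.Dir(src), 0o755); err != nil {
		fmt.Println(err)
		return
	}
	if err := os.WriteFile(src, []byte("Test file for checking file sync process\n"), 0o644); err != nil {
		fmt.Println(err)
		return
	}
	// A `minikube start -p functional-676725` has to run between seeding the file
	// and this read-back for the sync to happen; the harness does that at setup.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-676725",
		"ssh", "sudo cat /etc/test/nested/copy/375797/hosts").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("read-back failed:", err)
	}
}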

                                                
                                    
TestFunctional/parallel/CertSync (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/375797.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /etc/ssl/certs/375797.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/375797.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /usr/share/ca-certificates/375797.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3757972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /etc/ssl/certs/3757972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3757972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /usr/share/ca-certificates/3757972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.11s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-676725 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh "sudo systemctl is-active docker": exit status 1 (344.334989ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh "sudo systemctl is-active containerd": exit status 1 (343.753307ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.69s)
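
The non-zero exits above are the expected result, not failures: systemctl is-active exits non-zero for anything other than an active unit (status 3 surfaces through ssh here), and with crio as the configured runtime both docker and containerd must report inactive. A sketch of querying all three runtimes and interpreting the exit codes:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-676725",
			"ssh", "sudo systemctl is-active "+unit)
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("%s: active (%s)\n", unit, state)
		case errors.As(err, &exitErr):
			// Non-zero exit: the unit is not active, which is what the test wants
			// for the runtimes that are not in use.
			fmt.Printf("%s: not active (%s, exit %d)\n", unit, state, exitErr.ExitCode())
		default:
			fmt.Printf("%s: could not query: %v\n", unit, err)
		}
	}
}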

                                                
                                    
TestFunctional/parallel/License (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676725 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-676725
localhost/kicbase/echo-server:functional-676725
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676725 image ls --format short --alsologtostderr:
I1217 19:32:26.857834  414201 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:26.858199  414201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:26.858214  414201 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:26.858220  414201 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:26.858530  414201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:32:26.859358  414201 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:26.859485  414201 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:26.859944  414201 cli_runner.go:164] Run: docker container inspect functional-676725 --format={{.State.Status}}
I1217 19:32:26.881394  414201 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:26.881450  414201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676725
I1217 19:32:26.901128  414201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-676725/id_rsa Username:docker}
I1217 19:32:27.006192  414201 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676725 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-proxy              │ v1.34.3                               │ 36eef8e07bdd6 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.5-0                               │ a3e246e9556e9 │ 63.6MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest                                │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-676725                     │ a3ca7f5e8fb79 │ 1.47MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.3                               │ aa27095f56193 │ 89.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.3                               │ 5826b25d990d7 │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.3                               │ aec12dadf56dd │ 53.9MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1                               │ 52546a367cc9e │ 76.1MB │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-676725                     │ 9056ab77afb8e │ 4.94MB │
│ localhost/minikube-local-cache-test     │ functional-676725                     │ a0e56b93d7f16 │ 3.33kB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676725 image ls --format table --alsologtostderr:
I1217 19:32:32.157738  415787 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:32.157991  415787 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:32.157999  415787 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:32.158004  415787 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:32.158232  415787 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:32:32.158800  415787 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:32.158899  415787 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:32.159378  415787 cli_runner.go:164] Run: docker container inspect functional-676725 --format={{.State.Status}}
I1217 19:32:32.179836  415787 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:32.179920  415787 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676725
I1217 19:32:32.202153  415787 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-676725/id_rsa Username:docker}
I1217 19:32:32.307345  415787 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls --format json --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 image ls --format json --alsologtostderr: (1.740269798s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676725 image ls --format json --alsologtostderr:
[{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c98
46c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},{"id":"aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9","registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.3"],"size":"53853013"},{"id":"a0e56b93d7f16ba3d4324dcbe8d96e2cb8ecdfbab1c3fac5a30ea40166c45688","repoDigests":["localhost/minikube-local-cache-test@sha256:055ff83acad093729743f63e18c47408874e1b5db431217072eb010125c49131"],"repoTags":["localhost/minikube-local-cache-test:functional-676725"],"size":"3330"},{"id":"52546a367cc9e0d924aa3b19059
6a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"2a3c14d7f8712f3f797a73fceae4ecbf36464339597b2277a2b3c541a1bdc63c","repoDigests":["docker.io/library/f5410167bdf124f5bafb7e317453fb98e576677ebc4ec7b708aa4f99db2df1a4-tmp@sha256:5401e0d722dee9f564a5e8b826894f6d957cd8e909f5e812ad38d96f31dc5290"],"repoTags":[],"size":"1466132"},{"id":"56c
c512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4eb
f583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954","registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"76004183"},{"id":"aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460","registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"],"size":"89050097"},{"id":"36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df9
2efdc07bac873064f03ed6","registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"73145241"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{
"id":"a3ca7f5e8fb791df2bb19fa8d77429fb03f1f5186247c7a8984c0078f883c30b","repoDigests":["localhost/my-image@sha256:a4ecc6babbad5f84494a21ad88c969f559d9781710a25410db39806952fc97d2"],"repoTags":["localhost/my-image:functional-676725"],"size":"1468744"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534","registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"63585106"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30"
,"repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-676725"],"size":"4943877"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676725 image ls --format json --alsologtostderr:
I1217 19:32:30.434290  415196 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:30.434408  415196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:30.434415  415196 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:30.434421  415196 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:30.434673  415196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:32:30.435536  415196 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:30.435761  415196 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:30.436466  415196 cli_runner.go:164] Run: docker container inspect functional-676725 --format={{.State.Status}}
I1217 19:32:30.461630  415196 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:30.461697  415196 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676725
I1217 19:32:30.484857  415196 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-676725/id_rsa Username:docker}
I1217 19:32:30.596684  415196 ssh_runner.go:195] Run: sudo crictl images --output json
I1217 19:32:32.083913  415196 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.487189791s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (1.74s)
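
image ls --format json emits an array of objects with id, repoDigests, repoTags and size fields (size is a string), as shown in the payload above. A sketch of decoding it into a Go struct and printing a tag/size summary:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Field names taken from the JSON payload in the log above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-676725",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected payload:", err)
		return
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}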

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676725 image ls --format yaml --alsologtostderr:
- id: a0e56b93d7f16ba3d4324dcbe8d96e2cb8ecdfbab1c3fac5a30ea40166c45688
repoDigests:
- localhost/minikube-local-cache-test@sha256:055ff83acad093729743f63e18c47408874e1b5db431217072eb010125c49131
repoTags:
- localhost/minikube-local-cache-test:functional-676725
size: "3330"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
- registry.k8s.io/kube-controller-manager@sha256:90ceecee64b3dac0e619928b9b21522bde1a120bb039971110ab68d830c1f1a2
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-676725
size: "4943877"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: 36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
- registry.k8s.io/kube-proxy@sha256:aee44d152c9eaa4f3e10584e61ee501a094880168db257af1201c806982a0945
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "73145241"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
- registry.k8s.io/etcd@sha256:28cf8781a30d69c2e3a969764548497a949a363840e1de34e014608162644778
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "63585106"
- id: aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
- registry.k8s.io/kube-apiserver@sha256:9b2e9bae4dc94991e51c601ba6a00369b45064243ba7822143b286edb9d41f9e
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "89050097"
- id: aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:490ff7b484d67db4a77e8d4bba9f12da68f6a3cae8da3b977522b60c8b5092c9
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "53853013"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676725 image ls --format yaml --alsologtostderr:
I1217 19:32:27.105223  414334 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:27.105545  414334 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.105552  414334 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:27.105559  414334 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.105846  414334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:32:27.106693  414334 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.106829  414334 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.107340  414334 cli_runner.go:164] Run: docker container inspect functional-676725 --format={{.State.Status}}
I1217 19:32:27.130492  414334 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.130578  414334 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676725
I1217 19:32:27.152599  414334 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-676725/id_rsa Username:docker}
I1217 19:32:27.256232  414334 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh pgrep buildkitd: exit status 1 (326.312388ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image build -t localhost/my-image:functional-676725 testdata/build --alsologtostderr
I1217 19:32:27.632551  375797 detect.go:223] nested VM detected
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 image build -t localhost/my-image:functional-676725 testdata/build --alsologtostderr: (2.436960351s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676725 image build -t localhost/my-image:functional-676725 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2a3c14d7f87
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-676725
--> a3ca7f5e8fb
Successfully tagged localhost/my-image:functional-676725
a3ca7f5e8fb791df2bb19fa8d77429fb03f1f5186247c7a8984c0078f883c30b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676725 image build -t localhost/my-image:functional-676725 testdata/build --alsologtostderr:
I1217 19:32:27.689170  414568 out.go:360] Setting OutFile to fd 1 ...
I1217 19:32:27.689571  414568 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.689588  414568 out.go:374] Setting ErrFile to fd 2...
I1217 19:32:27.689595  414568 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:32:27.690042  414568 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:32:27.691258  414568 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.691912  414568 config.go:182] Loaded profile config "functional-676725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
I1217 19:32:27.692432  414568 cli_runner.go:164] Run: docker container inspect functional-676725 --format={{.State.Status}}
I1217 19:32:27.717573  414568 ssh_runner.go:195] Run: systemctl --version
I1217 19:32:27.717634  414568 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676725
I1217 19:32:27.740809  414568 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-676725/id_rsa Username:docker}
I1217 19:32:27.847292  414568 build_images.go:162] Building image from path: /tmp/build.2673616216.tar
I1217 19:32:27.847380  414568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 19:32:27.860388  414568 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2673616216.tar
I1217 19:32:27.865632  414568 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2673616216.tar: stat -c "%s %y" /var/lib/minikube/build/build.2673616216.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2673616216.tar': No such file or directory
I1217 19:32:27.865672  414568 ssh_runner.go:362] scp /tmp/build.2673616216.tar --> /var/lib/minikube/build/build.2673616216.tar (3072 bytes)
I1217 19:32:27.890481  414568 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2673616216
I1217 19:32:27.901100  414568 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2673616216 -xf /var/lib/minikube/build/build.2673616216.tar
I1217 19:32:27.911146  414568 crio.go:315] Building image: /var/lib/minikube/build/build.2673616216
I1217 19:32:27.911230  414568 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-676725 /var/lib/minikube/build/build.2673616216 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 19:32:30.028492  414568 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-676725 /var/lib/minikube/build/build.2673616216 --cgroup-manager=cgroupfs: (2.117225972s)
I1217 19:32:30.028573  414568 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2673616216
I1217 19:32:30.038934  414568 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2673616216.tar
I1217 19:32:30.049211  414568 build_images.go:218] Built localhost/my-image:functional-676725 from /tmp/build.2673616216.tar
I1217 19:32:30.049255  414568 build_images.go:134] succeeded building to: functional-676725
I1217 19:32:30.049263  414568 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.06s)
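
The STEP lines in the stdout above imply that testdata/build holds a content.txt plus a three-step Dockerfile; a sketch of that file reconstructed from this log (the actual testdata contents are not included in the report):

	# reconstructed from the podman STEP output above, not copied from the repo
	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /

As the ssh_runner lines show, minikube tars the build context to /var/lib/minikube/build/ on the node and builds it there with `sudo podman build ... --cgroup-manager=cgroupfs`, since the container runtime is CRI-O.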

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-676725
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (14.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-676725 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-676725 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-v2tfn" [02dc9e43-7a6e-4198-82ee-8e7d54941041] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-v2tfn" [02dc9e43-7a6e-4198-82ee-8e7d54941041] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 14.004131453s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (14.22s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 409774: os: process already finished
helpers_test.go:520: unable to terminate pid 409580: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-676725 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [2e80ac03-7c7f-42c1-8504-a13db0673f04] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [2e80ac03-7c7f-42c1-8504-a13db0673f04] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.003575667s
I1217 19:32:19.367597  375797 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image load --daemon kicbase/echo-server:functional-676725 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 image load --daemon kicbase/echo-server:functional-676725 --alsologtostderr: (3.850509023s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (4.10s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-676725
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image load --daemon kicbase/echo-server:functional-676725 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-676725 image load --daemon kicbase/echo-server:functional-676725 --alsologtostderr: (1.244298089s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image save kicbase/echo-server:functional-676725 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image rm kicbase/echo-server:functional-676725 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.62s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-676725
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 image save --daemon kicbase/echo-server:functional-676725 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-676725
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
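
The four image tests above (ImageSaveToFile, ImageRemove, ImageLoadFromFile, ImageSaveDaemon) form a save/load round trip. A condensed sketch of the same sequence using the commands from the log, with the tarball path shortened for readability:

	out/minikube-linux-amd64 -p functional-676725 image save kicbase/echo-server:functional-676725 ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-676725 image rm kicbase/echo-server:functional-676725
	out/minikube-linux-amd64 -p functional-676725 image load ./echo-server-save.tar
	out/minikube-linux-amd64 -p functional-676725 image save --daemon kicbase/echo-server:functional-676725
	# the daemon-side copy shows up under localhost/, as the final docker image inspect confirms
	docker image inspect localhost/kicbase/echo-server:functional-676725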

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-676725 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.60.86 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
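
Taken together, the tunnel tests amount to: create a LoadBalancer service, keep `minikube tunnel` running so the service gets an ingress IP, then hit that IP directly. A minimal sketch of the same flow by hand; backgrounding with & and the curl probe are illustrative, not what the test harness does:

	kubectl --context functional-676725 apply -f testdata/testsvc.yaml
	out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr &
	kubectl --context functional-676725 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	# probe the reported ingress IP (10.108.60.86 in this run)
	curl -s http://10.108.60.86/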

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-676725 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 service list -o json
functional_test.go:1504: Took "938.204012ms" to run "out/minikube-linux-amd64 -p functional-676725 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.94s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31962
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.58s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31962
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.56s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "338.174901ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "62.775452ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "343.569876ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.898272ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdany-port291655639/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1765999944454125419" to /tmp/TestFunctionalparallelMountCmdany-port291655639/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1765999944454125419" to /tmp/TestFunctionalparallelMountCmdany-port291655639/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1765999944454125419" to /tmp/TestFunctionalparallelMountCmdany-port291655639/001/test-1765999944454125419
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.130074ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:32:24.750612  375797 retry.go:31] will retry after 469.245302ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 19:32 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 19:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 19:32 test-1765999944454125419
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh cat /mount-9p/test-1765999944454125419
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-676725 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [ca40123c-0494-463f-b688-6a634f1d4694] Pending
helpers_test.go:353: "busybox-mount" [ca40123c-0494-463f-b688-6a634f1d4694] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [ca40123c-0494-463f-b688-6a634f1d4694] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [ca40123c-0494-463f-b688-6a634f1d4694] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004105685s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-676725 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdany-port291655639/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.08s)
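
The any-port mount test drives a 9p mount end to end: start the mount, confirm the guest sees it, exercise it from a pod, then unmount. A small sketch of the same steps run by hand; /tmp/somedir is a placeholder for the test's temp directory and the trailing & stands in for the test's daemon handling:

	out/minikube-linux-amd64 mount -p functional-676725 /tmp/somedir:/mount-9p --alsologtostderr -v=1 &
	# verify the 9p mount is visible inside the guest
	out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-676725 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-676725 ssh "sudo umount -f /mount-9p"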

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.54s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdspecific-port2132983138/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (324.22428ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:32:31.857733  375797 retry.go:31] will retry after 494.91333ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdspecific-port2132983138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh "sudo umount -f /mount-9p": exit status 1 (277.577377ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-676725 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdspecific-port2132983138/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.91s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1793597317/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1793597317/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1793597317/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T" /mount1: exit status 1 (367.67008ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:32:33.816447  375797 retry.go:31] will retry after 610.219629ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676725 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-676725 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1793597317/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1793597317/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676725 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1793597317/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
2025/12/17 19:32:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.89s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-676725
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-676725
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-676725
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22186-372245/.minikube/files/etc/test/nested/copy/375797/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (40.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431355 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-431355 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (40.501770747s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (40.50s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1217 19:33:19.931062  375797 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431355 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-431355 --alsologtostderr -v=8: (6.292483063s)
functional_test.go:678: soft start took 6.292871172s for "functional-431355" cluster.
I1217 19:33:26.223909  375797 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (6.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.05s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-431355 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.63s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC746566961/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cache add minikube-local-cache-test:functional-431355
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cache delete minikube-local-cache-test:functional-431355
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-431355
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (0.89s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.6s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (298.379106ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.60s)
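
cache_reload checks that `cache reload` restores an image that was removed inside the node. A compact sketch of the same cycle using the commands from the log (the pause image was cached earlier by the add_remote step):

	out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl rmi registry.k8s.io/pause:latest
	# fails with "no such image", as in the output above
	out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl inspecti registry.k8s.io/pause:latest
	out/minikube-linux-amd64 -p functional-431355 cache reload
	# succeeds again once the cached image has been pushed back into the node
	out/minikube-linux-amd64 -p functional-431355 ssh sudo crictl inspecti registry.k8s.io/pause:latest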

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 kubectl -- --context functional-431355 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-431355 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (46.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431355 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 19:33:41.857812  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-431355 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.278845215s)
functional_test.go:776: restart took 46.279019819s for "functional-431355" cluster.
I1217 19:34:18.583120  375797 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (46.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-431355 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.07s)
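
ComponentHealth reads the control-plane pods as JSON and asserts each is Running and Ready. A small sketch of the same check done manually; the jq filter is an assumption for illustration, not what the test runs:

	kubectl --context functional-431355 get po -l tier=control-plane -n kube-system -o json \
	  | jq -r '.items[] | "\(.metadata.name): \(.status.phase)"'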

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-431355 logs: (1.265580488s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi2884733210/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-431355 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi2884733210/001/logs.txt: (1.284544324s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.29s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-431355 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-431355
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-431355: exit status 115 (369.735765ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31379 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-431355 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.12s)
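
Exit status 115 above corresponds to SVC_UNREACHABLE: the Service exists and gets a NodePort, but selects no running pod. A hedged sketch of asserting that exit code from Go, reusing the binary and profile named in this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Expect minikube to refuse to hand out a working URL for a service with no ready endpoints.
	err := exec.Command("out/minikube-linux-amd64", "service", "invalid-svc", "-p", "functional-431355").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 115 {
		fmt.Println("got the expected SVC_UNREACHABLE exit code (115)")
		return
	}
	fmt.Println("unexpected result:", err)
}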

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 config get cpus: exit status 14 (97.759648ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 config get cpus: exit status 14 (81.54525ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.50s)
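
Exit status 14 above is what `config get` returns for a key that is not set; the step cycles unset, get, set, get, unset, get. A small sketch of the same round trip, assuming only the CLI behaviour shown in this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func getCpus() (string, int) {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-431355",
		"config", "get", "cpus").Output()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return "", exitErr.ExitCode()
		}
		return "", -1
	}
	return strings.TrimSpace(string(out)), 0
}

func main() {
	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-431355", "config", "unset", "cpus").Run()
	if _, code := getCpus(); code == 14 {
		fmt.Println("unset key -> exit code 14, as in the run above")
	}
	_ = exec.Command("out/minikube-linux-amd64", "-p", "functional-431355", "config", "set", "cpus", "2").Run()
	if val, code := getCpus(); code == 0 {
		fmt.Println("after set: cpus =", val)
	}
}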

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-431355 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-431355 --alsologtostderr -v=1] ...
helpers_test.go:526: unable to kill pid 429552: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (15.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431355 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-431355 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (177.656366ms)

                                                
                                                
-- stdout --
	* [functional-431355] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:34:28.721641  428912 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:34:28.721748  428912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:34:28.721759  428912 out.go:374] Setting ErrFile to fd 2...
	I1217 19:34:28.721765  428912 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:34:28.722031  428912 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:34:28.722552  428912 out.go:368] Setting JSON to false
	I1217 19:34:28.723578  428912 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4620,"bootTime":1765995449,"procs":225,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:34:28.723638  428912 start.go:143] virtualization: kvm guest
	I1217 19:34:28.725435  428912 out.go:179] * [functional-431355] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:34:28.727437  428912 notify.go:221] Checking for updates...
	I1217 19:34:28.727558  428912 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:34:28.728702  428912 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:34:28.730067  428912 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:34:28.731246  428912 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:34:28.732337  428912 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:34:28.733475  428912 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:34:28.735352  428912 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:34:28.736219  428912 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:34:28.764303  428912 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:34:28.764436  428912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:34:28.824527  428912 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 19:34:28.813276008 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:34:28.824638  428912 docker.go:319] overlay module found
	I1217 19:34:28.826382  428912 out.go:179] * Using the docker driver based on existing profile
	I1217 19:34:28.827650  428912 start.go:309] selected driver: docker
	I1217 19:34:28.827672  428912 start.go:927] validating driver "docker" against &{Name:functional-431355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-431355 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:34:28.827852  428912 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:34:28.829709  428912 out.go:203] 
	W1217 19:34:28.830969  428912 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 19:34:28.832264  428912 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431355 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.43s)
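
Exit status 23 above is RSRC_INSUFFICIENT_REQ_MEMORY: the dry run requests 250MiB, which is rejected against the 1800MB usable minimum quoted in the error. A toy validation in the same spirit (the constant and function names are illustrative, taken only from the message in this log, not from minikube's source):

package main

import "fmt"

// minUsableMemoryMB mirrors the minimum quoted in the error message above.
const minUsableMemoryMB = 1800

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}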

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-431355 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-431355 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: exit status 23 (191.763859ms)

                                                
                                                
-- stdout --
	* [functional-431355] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:34:28.228576  428537 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:34:28.228673  428537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:34:28.228678  428537 out.go:374] Setting ErrFile to fd 2...
	I1217 19:34:28.228682  428537 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:34:28.228973  428537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:34:28.229499  428537 out.go:368] Setting JSON to false
	I1217 19:34:28.230497  428537 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4619,"bootTime":1765995449,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:34:28.230566  428537 start.go:143] virtualization: kvm guest
	I1217 19:34:28.232484  428537 out.go:179] * [functional-431355] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1217 19:34:28.233589  428537 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:34:28.233642  428537 notify.go:221] Checking for updates...
	I1217 19:34:28.235611  428537 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:34:28.236820  428537 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:34:28.237947  428537 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:34:28.239328  428537 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:34:28.240527  428537 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:34:28.242219  428537 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:34:28.243048  428537 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:34:28.270644  428537 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:34:28.270839  428537 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:34:28.341457  428537 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-12-17 19:34:28.32989666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:34:28.341603  428537 docker.go:319] overlay module found
	I1217 19:34:28.343315  428537 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 19:34:28.344499  428537 start.go:309] selected driver: docker
	I1217 19:34:28.344520  428537 start.go:927] validating driver "docker" against &{Name:functional-431355 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-431355 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 19:34:28.344659  428537 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:34:28.346413  428537 out.go:203] 
	W1217 19:34:28.347697  428537 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 19:34:28.348941  428537 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-431355 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-431355 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-8rnc4" [0ab9d663-64b1-49b3-8bd9-dadb8af78d65] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-8rnc4" [0ab9d663-64b1-49b3-8bd9-dadb8af78d65] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.003644647s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32718
functional_test.go:1680: http://192.168.49.2:32718: success! body:
Request served by hello-node-connect-9f67c86d4-8rnc4

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:32718
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (9.75s)
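
The endpoint check above fetches the NodePort URL that `minikube service --url` printed and expects the echo-server to answer with the request details. A minimal poll-until-ready sketch; the URL is hard-coded from this run, whereas in practice it would be read from the command's output:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:32718" // printed by `minikube service hello-node-connect --url` above
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode == http.StatusOK {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s\n", body)
			return
		}
		if err == nil {
			resp.Body.Close()
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("service did not become reachable before the deadline")
}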

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (25.71s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [19c72ce8-c241-43cb-8def-55fb8da97220] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003851203s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-431355 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-431355 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-431355 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-431355 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:34:42.401510  375797 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [fbfc0890-8531-488f-92e5-996cc8d5e20f] Pending
helpers_test.go:353: "sp-pod" [fbfc0890-8531-488f-92e5-996cc8d5e20f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
2025/12/17 19:34:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:353: "sp-pod" [fbfc0890-8531-488f-92e5-996cc8d5e20f] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.0039246s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-431355 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-431355 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-431355 apply -f testdata/storage-provisioner/pod.yaml
I1217 19:34:56.609917  375797 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [210e99bb-9e48-46bb-9fa6-7d1d3d3df104] Pending
helpers_test.go:353: "sp-pod" [210e99bb-9e48-46bb-9fa6-7d1d3d3df104] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003364126s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-431355 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (25.71s)
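
The sequence above demonstrates persistence: write a file into the PVC-backed mount, delete the pod, recreate it from the same manifest, and confirm the file is still there. A compact sketch of those kubectl steps driven from Go; the manifest path is the testdata file named above, and a real run also waits for sp-pod to be Running again between the last two steps, as the log shows:

package main

import (
	"fmt"
	"os/exec"
)

func kubectl(args ...string) error {
	cmd := exec.Command("kubectl", append([]string{"--context", "functional-431355"}, args...)...)
	out, err := cmd.CombinedOutput()
	fmt.Printf("kubectl %v\n%s", args, out)
	return err
}

func main() {
	steps := [][]string{
		{"exec", "sp-pod", "--", "touch", "/tmp/mount/foo"},       // write through the PVC
		{"delete", "-f", "testdata/storage-provisioner/pod.yaml"}, // drop the pod, keep the claim
		{"apply", "-f", "testdata/storage-provisioner/pod.yaml"},  // recreate the pod
		{"exec", "sp-pod", "--", "ls", "/tmp/mount"},              // the file should still be present
	}
	for _, s := range steps {
		if err := kubectl(s...); err != nil {
			fmt.Println("step failed:", err)
			return
		}
	}
}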

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.61s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.61s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.95s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh -n functional-431355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cp functional-431355:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm792198306/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh -n functional-431355 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh -n functional-431355 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.95s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (22.15s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-431355 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-lxswl" [852076ab-df7e-4580-a58c-7b8d106ef968] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-lxswl" [852076ab-df7e-4580-a58c-7b8d106ef968] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 16.004059244s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;": exit status 1 (126.674862ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:34:45.414019  375797 retry.go:31] will retry after 828.18472ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;": exit status 1 (106.739697ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:34:46.349690  375797 retry.go:31] will retry after 1.586634708s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;": exit status 1 (96.66597ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1217 19:34:48.033445  375797 retry.go:31] will retry after 3.106186175s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-431355 exec mysql-7d7b65bc95-lxswl -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (22.15s)
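
The retries above are expected: right after the pod reports Running, mysqld is still initializing, so the first `show databases;` attempts fail with ERROR 1045/2002 and the harness backs off and tries again. A generic retry-with-backoff sketch in that spirit (not minikube's own retry helper):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// retry runs fn until it succeeds or attempts are exhausted, roughly doubling the wait each time.
func retry(attempts int, initial time.Duration, fn func() error) error {
	wait := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %s: %v\n", wait, err)
		time.Sleep(wait)
		wait *= 2
	}
	return err
}

func main() {
	err := retry(6, time.Second, func() error {
		return exec.Command("kubectl", "--context", "functional-431355", "exec",
			"mysql-7d7b65bc95-lxswl", "--", "mysql", "-ppassword", "-e", "show databases;").Run()
	})
	fmt.Println("final result:", err)
}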

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/375797/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /etc/test/nested/copy/375797/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.31s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/375797.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /etc/ssl/certs/375797.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/375797.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /usr/share/ca-certificates/375797.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3757972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /etc/ssl/certs/3757972.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3757972.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /usr/share/ca-certificates/3757972.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.88s)
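
The step verifies that the host's test certificate was synced into the node at both canonical locations plus the hashed alias; 375797 appears to be the test process ID used as the file name. A short sketch that checks the same paths over `minikube ssh`, with the paths copied from the run above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	paths := []string{
		"/etc/ssl/certs/375797.pem",
		"/usr/share/ca-certificates/375797.pem",
		"/etc/ssl/certs/51391683.0", // hashed alias for the same certificate, as checked above
	}
	for _, p := range paths {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-431355",
			"ssh", fmt.Sprintf("sudo cat %s", p)).Run()
		fmt.Printf("%s present=%v\n", p, err == nil)
	}
}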

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-431355 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.64s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh "sudo systemctl is-active docker": exit status 1 (300.175311ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh "sudo systemctl is-active containerd": exit status 1 (336.439591ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.64s)
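
With crio as the active runtime, docker and containerd should both report `inactive`; `systemctl is-active` signals that with a non-zero exit status (status 3 in the ssh output above), which is why these commands fail while still printing "inactive". A small sketch of that assertion:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-431355",
			"ssh", "sudo systemctl is-active "+unit).Output()
		state := strings.TrimSpace(string(out))
		var exitErr *exec.ExitError
		inactive := state == "inactive" && errors.As(err, &exitErr)
		fmt.Printf("%s: %q (disabled=%v)\n", unit, state, inactive)
	}
}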

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.3s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.30s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-431355 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-431355 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-shnh2" [55cc5f61-56b9-40dd-90db-ae32e9bfa0ce] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-shnh2" [55cc5f61-56b9-40dd-90db-ae32e9bfa0ce] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.004656969s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.52s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1221407051/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1766000065815198608" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1221407051/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1766000065815198608" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1221407051/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1766000065815198608" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1221407051/001/test-1766000065815198608
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.376403ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:34:26.136947  375797 retry.go:31] will retry after 387.122687ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 17 19:34 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 17 19:34 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 17 19:34 test-1766000065815198608
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh cat /mount-9p/test-1766000065815198608
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-431355 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [15b2f82d-3136-410e-8435-4380e405bb8d] Pending
helpers_test.go:353: "busybox-mount" [15b2f82d-3136-410e-8435-4380e405bb8d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [15b2f82d-3136-410e-8435-4380e405bb8d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [15b2f82d-3136-410e-8435-4380e405bb8d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004367697s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-431355 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1221407051/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (8.12s)
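
The first `findmnt` above fails because the 9p mount is still coming up, so the harness retries until it appears. A minimal poll sketch for the same condition, with the mount point and profile taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-431355",
			"ssh", "findmnt -T /mount-9p").Output()
		if err == nil && strings.Contains(string(out), "9p") {
			fmt.Println("9p mount is visible inside the guest")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("mount never appeared")
}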

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "396.181241ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.071057ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.48s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "400.311703ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "78.201709ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.48s)
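Both profile subtests only time the list command in its different output modes; the same calls can be made directly (jq here is just for pretty-printing and is an assumption about the local toolbox):

out/minikube-linux-amd64 profile list                          # table output, ~396ms above
out/minikube-linux-amd64 profile list -l                       # lighter listing, ~71ms above
out/minikube-linux-amd64 profile list -o json | jq .           # JSON output, ~400ms above
out/minikube-linux-amd64 profile list -o json --light | jq .   # --light JSON, ~78ms above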

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-431355 service list: (1.040163366s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (1.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1672424331/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (346.563349ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:34:34.282763  375797 retry.go:31] will retry after 577.327655ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1672424331/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh "sudo umount -f /mount-9p": exit status 1 (352.200943ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-431355 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1672424331/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (2.21s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.99s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 service list -o json
functional_test.go:1504: Took "989.772833ms" to run "out/minikube-linux-amd64 -p functional-431355 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (0.99s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32575
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.46s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4055758774/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4055758774/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4055758774/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T" /mount1: exit status 1 (456.872115ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1217 19:34:36.603158  375797 retry.go:31] will retry after 413.308202ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-431355 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4055758774/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4055758774/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-431355 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun4055758774/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.88s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.45s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32575
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.45s)
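The HTTPS, Format and URL subtests all resolve the same hello-node NodePort (32575 on 192.168.49.2 in this run); a rough sketch of doing the lookup and a probe manually (curl is an assumption about the host):

URL=$(out/minikube-linux-amd64 -p functional-431355 service hello-node --url)
HTTPS_URL=$(out/minikube-linux-amd64 -p functional-431355 service --namespace=default --https --url hello-node)
echo "http: $URL  https: $HTTPS_URL"
curl -s "$URL" | head -n 5    # probe the endpoint the tests located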

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-431355 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-431355 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-431355 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-431355 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 432414: os: process already finished
helpers_test.go:520: unable to terminate pid 432226: os: process already finished
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-431355 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (14.26s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-431355 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [3df33828-cfde-4dda-86c5-d6053816d68f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [3df33828-cfde-4dda-86c5-d6053816d68f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.004002994s
I1217 19:34:53.362002  375797 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/Setup (14.26s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.54s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431355 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
localhost/minikube-local-cache-test:functional-431355
localhost/kicbase/echo-server:functional-431355
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431355 image ls --format short --alsologtostderr:
I1217 19:34:56.448915  434993 out.go:360] Setting OutFile to fd 1 ...
I1217 19:34:56.449048  434993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.449058  434993 out.go:374] Setting ErrFile to fd 2...
I1217 19:34:56.449065  434993 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.449319  434993 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:34:56.450092  434993 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.450231  434993 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.450886  434993 cli_runner.go:164] Run: docker container inspect functional-431355 --format={{.State.Status}}
I1217 19:34:56.473649  434993 ssh_runner.go:195] Run: systemctl --version
I1217 19:34:56.473730  434993 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-431355
I1217 19:34:56.498377  434993 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-431355/id_rsa Username:docker}
I1217 19:34:56.608805  434993 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431355 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬───────────────────────────────────────┬───────────────┬────────┐
│                  IMAGE                  │                  TAG                  │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼───────────────────────────────────────┼───────────────┼────────┤
│ registry.k8s.io/pause                   │ latest                                │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20251212-v0.29.0-alpha-105-g20ccfc88 │ 4921d7a6dffa9 │ 108MB  │
│ localhost/minikube-local-cache-test     │ functional-431355                     │ a0e56b93d7f16 │ 3.33kB │
│ docker.io/kicbase/echo-server           │ latest                                │ 9056ab77afb8e │ 4.94MB │
│ localhost/kicbase/echo-server           │ functional-431355                     │ 9056ab77afb8e │ 4.94MB │
│ public.ecr.aws/nginx/nginx              │ alpine                                │ a236f84b9d5d2 │ 55.2MB │
│ registry.k8s.io/pause                   │ 3.3                                   │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc                          │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                                    │ 6e38f40d628db │ 31.5MB │
│ public.ecr.aws/docker/library/mysql     │ 8.4                                   │ 20d0be4ee4524 │ 804MB  │
│ registry.k8s.io/etcd                    │ 3.6.6-0                               │ 0a108f7189562 │ 63.6MB │
│ registry.k8s.io/kube-controller-manager │ v1.35.0-rc.1                          │ 5032a56602e1b │ 76.9MB │
│ registry.k8s.io/kube-proxy              │ v1.35.0-rc.1                          │ af0321f3a4f38 │ 72MB   │
│ registry.k8s.io/pause                   │ 3.1                                   │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1                                │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b                    │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.13.1                               │ aa5e3ebc0dfed │ 79.2MB │
│ registry.k8s.io/kube-apiserver          │ v1.35.0-rc.1                          │ 58865405a13bc │ 90.8MB │
│ registry.k8s.io/kube-scheduler          │ v1.35.0-rc.1                          │ 73f80cdc073da │ 52.8MB │
└─────────────────────────────────────────┴───────────────────────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431355 image ls --format table --alsologtostderr:
I1217 19:34:56.725225  435214 out.go:360] Setting OutFile to fd 1 ...
I1217 19:34:56.725332  435214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.725343  435214 out.go:374] Setting ErrFile to fd 2...
I1217 19:34:56.725349  435214 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.725560  435214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:34:56.726194  435214 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.726306  435214 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.726801  435214 cli_runner.go:164] Run: docker container inspect functional-431355 --format={{.State.Status}}
I1217 19:34:56.748314  435214 ssh_runner.go:195] Run: systemctl --version
I1217 19:34:56.748369  435214 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-431355
I1217 19:34:56.770682  435214 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-431355/id_rsa Username:docker}
I1217 19:34:56.877015  435214 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431355 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"a0e56b93d7f16ba3d4324dcbe8d96e2cb8ecdfbab1c3fac5a30ea40166c45688","repoDigests":["localhost/minikube-local-cache-test@sha256:055ff83acad093729743f63e18c47408874e1b5db431217072eb010125c49131"],"repoTags":["localhost/minikube-local-cache-test:functional-431355"],"size":"3330"},{"id":"20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036","public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"803724943"},
{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7","registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"79193994"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":["public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff","public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c"],"repoTags":["public.e
cr.aws/nginx/nginx:alpine"],"size":"55156597"},{"id":"5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98","registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"76893010"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","docker.io/kicbase/echo-server@sha256:a
82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf","localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6","localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86","localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["docker.io/kicbase/echo-server:latest","localhost/kicbase/echo-server:functional-431355"],"size":"4944818"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"4921d7a6dffa922dd679732ba4797085c4f39e9a53b
ee8b6fdb1d463e8571251","repoDigests":["docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae","docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27"],"repoTags":["docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88"],"size":"107598204"},{"id":"58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f","registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"90844140"},{"id":"af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9","registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["re
gistry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"71986585"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/sto
rage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a","registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"63582405"},{"id":"73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636","registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"52763474"},{"id":"350b164e7ae1dcdd
effadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431355 image ls --format json --alsologtostderr:
I1217 19:34:56.725217  435213 out.go:360] Setting OutFile to fd 1 ...
I1217 19:34:56.725347  435213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.725354  435213 out.go:374] Setting ErrFile to fd 2...
I1217 19:34:56.725361  435213 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.725686  435213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:34:56.726320  435213 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.726420  435213 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.726945  435213 cli_runner.go:164] Run: docker container inspect functional-431355 --format={{.State.Status}}
I1217 19:34:56.748523  435213 ssh_runner.go:195] Run: systemctl --version
I1217 19:34:56.748587  435213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-431355
I1217 19:34:56.770337  435213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-431355/id_rsa Username:docker}
I1217 19:34:56.877044  435213 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.27s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431355 image ls --format yaml --alsologtostderr:
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:246e7333fde10251c693b68f13d21d6d64c7dbad866bbfa11bd49315e3f725a7
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "79193994"
- id: 0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:5279f56db4f32772bb41e47ca44c553f5c87a08fdf339d74c23a4cdc3c388d6a
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "63582405"
- id: 5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
- registry.k8s.io/kube-controller-manager@sha256:94b94fef358192d13794f5acd21909a3eb0b3e960ed4286ef37a437e7f9272cd
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "76893010"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- docker.io/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
- localhost/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
- localhost/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- docker.io/kicbase/echo-server:latest
- localhost/kicbase/echo-server:functional-431355
size: "4944818"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: a0e56b93d7f16ba3d4324dcbe8d96e2cb8ecdfbab1c3fac5a30ea40166c45688
repoDigests:
- localhost/minikube-local-cache-test@sha256:055ff83acad093729743f63e18c47408874e1b5db431217072eb010125c49131
repoTags:
- localhost/minikube-local-cache-test:functional-431355
size: "3330"
- id: 58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:4527daf97bed5f1caff2267f9b84a6c626b82615d9ff7f933619321aebde536f
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "90844140"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:2cd5820b9add3517ca088e314ca9e9c4f5e60fd6de7c14ea0a2b8a0523b2e036
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "803724943"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:9b0f84d48f92f2147217aec522219e9eda883a2836f1e30ab1915bd794f294ff
- public.ecr.aws/nginx/nginx@sha256:ec57271c43784c07301ebcc4bf37d6011b9b9d661d0cf229f2aa199e78a7312c
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "55156597"
- id: af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:0efaa6b2a17dbaaac351bb0f55c1a495d297d87ac86b16965ec52e835c2b48d9
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "71986585"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 4921d7a6dffa922dd679732ba4797085c4f39e9a53bee8b6fdb1d463e8571251
repoDigests:
- docker.io/kindest/kindnetd@sha256:377e2e7a513148f7c942b51cd57bdce1589940df856105384ac7f753a1ab43ae
- docker.io/kindest/kindnetd@sha256:7c22558dc06a570d46ea6e8a73b23cdc754eb81f7c08d3441a3171ad359ffc27
repoTags:
- docker.io/kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
size: "107598204"
- id: 73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:1e2bf4dfee764cc2eb3300c543b3ce1b00ca3ffc46b93f2b7ef326fbc2385636
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "52763474"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431355 image ls --format yaml --alsologtostderr:
I1217 19:34:56.449008  434994 out.go:360] Setting OutFile to fd 1 ...
I1217 19:34:56.449481  434994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.449500  434994 out.go:374] Setting ErrFile to fd 2...
I1217 19:34:56.449508  434994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.449840  434994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:34:56.450888  434994 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.451031  434994 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.451836  434994 cli_runner.go:164] Run: docker container inspect functional-431355 --format={{.State.Status}}
I1217 19:34:56.473660  434994 ssh_runner.go:195] Run: systemctl --version
I1217 19:34:56.473716  434994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-431355
I1217 19:34:56.498135  434994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-431355/id_rsa Username:docker}
I1217 19:34:56.608771  434994 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.81s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-431355 ssh pgrep buildkitd: exit status 1 (319.100655ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image build -t localhost/my-image:functional-431355 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-431355 image build -t localhost/my-image:functional-431355 testdata/build --alsologtostderr: (2.242441481s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-431355 image build -t localhost/my-image:functional-431355 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d6c1e724eeb
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-431355
--> bac60778b9e
Successfully tagged localhost/my-image:functional-431355
bac60778b9e41223fc69936f68ef8fac9db73a865d591534b7d9a656f65e2b20
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-431355 image build -t localhost/my-image:functional-431355 testdata/build --alsologtostderr:
I1217 19:34:56.767249  435234 out.go:360] Setting OutFile to fd 1 ...
I1217 19:34:56.767393  435234 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.767404  435234 out.go:374] Setting ErrFile to fd 2...
I1217 19:34:56.767410  435234 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 19:34:56.767707  435234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
I1217 19:34:56.768354  435234 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.769024  435234 config.go:182] Loaded profile config "functional-431355": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
I1217 19:34:56.769509  435234 cli_runner.go:164] Run: docker container inspect functional-431355 --format={{.State.Status}}
I1217 19:34:56.794593  435234 ssh_runner.go:195] Run: systemctl --version
I1217 19:34:56.794782  435234 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-431355
I1217 19:34:56.819249  435234 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/functional-431355/id_rsa Username:docker}
I1217 19:34:56.922275  435234 build_images.go:162] Building image from path: /tmp/build.2448051815.tar
I1217 19:34:56.922361  435234 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 19:34:56.932405  435234 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2448051815.tar
I1217 19:34:56.936566  435234 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2448051815.tar: stat -c "%s %y" /var/lib/minikube/build/build.2448051815.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2448051815.tar': No such file or directory
I1217 19:34:56.936601  435234 ssh_runner.go:362] scp /tmp/build.2448051815.tar --> /var/lib/minikube/build/build.2448051815.tar (3072 bytes)
I1217 19:34:56.955525  435234 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2448051815
I1217 19:34:56.964628  435234 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2448051815 -xf /var/lib/minikube/build/build.2448051815.tar
I1217 19:34:56.973643  435234 crio.go:315] Building image: /var/lib/minikube/build/build.2448051815
I1217 19:34:56.973744  435234 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-431355 /var/lib/minikube/build/build.2448051815 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1217 19:34:58.909310  435234 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-431355 /var/lib/minikube/build/build.2448051815 --cgroup-manager=cgroupfs: (1.935525937s)
I1217 19:34:58.909402  435234 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2448051815
I1217 19:34:58.918231  435234 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2448051815.tar
I1217 19:34:58.925974  435234 build_images.go:218] Built localhost/my-image:functional-431355 from /tmp/build.2448051815.tar
I1217 19:34:58.926012  435234 build_images.go:134] succeeded building to: functional-431355
I1217 19:34:58.926019  435234 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (2.81s)
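The three STEP lines above imply a three-instruction Containerfile in testdata/build; a hypothetical reconstruction (the directory name and content.txt payload below are invented) that drives the same in-guest podman build path:

mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
echo "any payload" > content.txt            # stands in for the real content.txt
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
EOF
out/minikube-linux-amd64 -p functional-431355 image build -t localhost/my-image:functional-431355 . --alsologtostderr
out/minikube-linux-amd64 -p functional-431355 image ls | grep my-image   # confirm the tag landed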

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-431355
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (0.17s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image load --daemon kicbase/echo-server:functional-431355 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.08s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image load --daemon kicbase/echo-server:functional-431355 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (0.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-431355
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image load --daemon kicbase/echo-server:functional-431355 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (1.07s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-431355 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
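The jsonpath query here is how the test reads the LoadBalancer address that the tunnel assigns to nginx-svc; run by hand it would look roughly like this (the field is only populated while the tunnel started earlier in the serial group is still running):

  kubectl --context functional-431355 get svc nginx-svc \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'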

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.45.195 is working!
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.00s)
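A minimal sketch of what AccessDirect verifies, assuming the same profile and service names: keep a tunnel open, read the assigned ingress IP, and fetch the service straight from the host.

  minikube -p functional-431355 tunnel &        # leave running; it provides the route to the LoadBalancer IP
  IP=$(kubectl --context functional-431355 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl "http://$IP"                             # 10.108.45.195 in this run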

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-431355 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image save kicbase/echo-server:functional-431355 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image rm kicbase/echo-server:functional-431355 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.65s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.65s)
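Taken together with ImageSaveToFile and ImageRemove above, this covers the tarball round trip. A condensed hand-run sketch (profile name and image tag are from this run; the tar path is illustrative):

  minikube -p functional-431355 image save kicbase/echo-server:functional-431355 ./echo-server-save.tar   # export to a tar on the host
  minikube -p functional-431355 image rm kicbase/echo-server:functional-431355                            # drop it from the cluster
  minikube -p functional-431355 image load ./echo-server-save.tar                                         # restore it from the tar
  minikube -p functional-431355 image ls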

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-431355
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-431355 image save --daemon kicbase/echo-server:functional-431355 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-431355
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.41s)
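ImageSaveDaemon checks the inverse direction, exporting from the cluster back into the host's Docker daemon. Roughly:

  docker rmi kicbase/echo-server:functional-431355                                            # clear the host-side copy first
  minikube -p functional-431355 image save --daemon kicbase/echo-server:functional-431355     # copy the cluster's image into the host daemon
  docker image inspect localhost/kicbase/echo-server:functional-431355                        # as in the log, the restored name carries the localhost/ prefix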

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-431355
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-431355
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-431355
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (112.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 19:35:57.993584  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:36:25.700231  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m52.220179557s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (112.99s)
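For reference, the cluster under test from here on is a multi-control-plane (HA) profile on the docker driver with CRI-O; the invocation in the log reduces to:

  minikube start -p ha-284200 --ha --memory 3072 --wait true --driver=docker --container-runtime=crio
  minikube -p ha-284200 status    # the control-plane nodes should all report Running and Configured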

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 kubectl -- rollout status deployment/busybox: (2.49723931s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-52zff -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-6xd48 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-75g86 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-52zff -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-6xd48 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-75g86 -- nslookup kubernetes.default
E1217 19:37:03.606136  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:03.612510  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:03.623965  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:03.645922  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-52zff -- nslookup kubernetes.default.svc.cluster.local
E1217 19:37:03.688051  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:03.769584  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-6xd48 -- nslookup kubernetes.default.svc.cluster.local
E1217 19:37:03.931188  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-75g86 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.00s)
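DeployApp is a DNS smoke test: it applies a small busybox Deployment from the repo's testdata, waits for the rollout, then checks that every replica can resolve kubernetes.io, kubernetes.default, and the cluster FQDN. The core steps, run by hand, are roughly:

  kubectl --context ha-284200 apply -f testdata/ha/ha-pod-dns-test.yaml
  kubectl --context ha-284200 rollout status deployment/busybox
  kubectl --context ha-284200 get pods -o jsonpath='{.items[*].metadata.name}'
  kubectl --context ha-284200 exec <pod-name> -- nslookup kubernetes.default   # repeated for each replica and each lookup target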

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
E1217 19:37:04.253300  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-52zff -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-52zff -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-6xd48 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-6xd48 -- sh -c "ping -c 1 192.168.49.1"
E1217 19:37:04.894974  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-75g86 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 kubectl -- exec busybox-7b57f96db7-75g86 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)
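PingHostFromPods confirms pods can reach the host: each replica resolves host.minikube.internal and pings the resulting address (192.168.49.1, the docker-network gateway in this run). Approximately:

  kubectl --context ha-284200 exec <pod-name> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  kubectl --context ha-284200 exec <pod-name> -- sh -c "ping -c 1 192.168.49.1"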

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (24.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node add --alsologtostderr -v 5
E1217 19:37:06.176824  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:08.738879  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:13.860307  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:37:24.101906  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 node add --alsologtostderr -v 5: (23.67940169s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (24.61s)
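Adding a worker is a single command against the existing profile; the new node appears as ha-284200-m04 in the later status output:

  minikube -p ha-284200 node add      # without --control-plane this joins a worker node
  minikube -p ha-284200 status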

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-284200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.95s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (18.3s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --output json --alsologtostderr -v 5
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp testdata/cp-test.txt ha-284200:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3757663345/001/cp-test_ha-284200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200:/home/docker/cp-test.txt ha-284200-m02:/home/docker/cp-test_ha-284200_ha-284200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test_ha-284200_ha-284200-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200:/home/docker/cp-test.txt ha-284200-m03:/home/docker/cp-test_ha-284200_ha-284200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test_ha-284200_ha-284200-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200:/home/docker/cp-test.txt ha-284200-m04:/home/docker/cp-test_ha-284200_ha-284200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test_ha-284200_ha-284200-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp testdata/cp-test.txt ha-284200-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3757663345/001/cp-test_ha-284200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m02:/home/docker/cp-test.txt ha-284200:/home/docker/cp-test_ha-284200-m02_ha-284200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test_ha-284200-m02_ha-284200.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m02:/home/docker/cp-test.txt ha-284200-m03:/home/docker/cp-test_ha-284200-m02_ha-284200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test_ha-284200-m02_ha-284200-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m02:/home/docker/cp-test.txt ha-284200-m04:/home/docker/cp-test_ha-284200-m02_ha-284200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test_ha-284200-m02_ha-284200-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp testdata/cp-test.txt ha-284200-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3757663345/001/cp-test_ha-284200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m03:/home/docker/cp-test.txt ha-284200:/home/docker/cp-test_ha-284200-m03_ha-284200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test_ha-284200-m03_ha-284200.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m03:/home/docker/cp-test.txt ha-284200-m02:/home/docker/cp-test_ha-284200-m03_ha-284200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test_ha-284200-m03_ha-284200-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m03:/home/docker/cp-test.txt ha-284200-m04:/home/docker/cp-test_ha-284200-m03_ha-284200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test_ha-284200-m03_ha-284200-m04.txt"
E1217 19:37:44.584145  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp testdata/cp-test.txt ha-284200-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3757663345/001/cp-test_ha-284200-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m04:/home/docker/cp-test.txt ha-284200:/home/docker/cp-test_ha-284200-m04_ha-284200.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200 "sudo cat /home/docker/cp-test_ha-284200-m04_ha-284200.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m04:/home/docker/cp-test.txt ha-284200-m02:/home/docker/cp-test_ha-284200-m04_ha-284200-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test_ha-284200-m04_ha-284200-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 cp ha-284200-m04:/home/docker/cp-test.txt ha-284200-m03:/home/docker/cp-test_ha-284200-m04_ha-284200-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 ssh -n ha-284200-m03 "sudo cat /home/docker/cp-test_ha-284200-m04_ha-284200-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.30s)
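CopyFile iterates minikube cp over every host/node pair. The individual operations look like this (node names and the /home/docker paths are from this run; the host-side destination here is illustrative):

  minikube -p ha-284200 cp testdata/cp-test.txt ha-284200-m02:/home/docker/cp-test.txt       # host -> node
  minikube -p ha-284200 ssh -n ha-284200-m02 "sudo cat /home/docker/cp-test.txt"             # verify on that node
  minikube -p ha-284200 cp ha-284200-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt       # node -> host
  minikube -p ha-284200 cp ha-284200-m02:/home/docker/cp-test.txt ha-284200-m03:/home/docker/cp-test_m02.txt   # node -> node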

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (18.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 node stop m02 --alsologtostderr -v 5: (18.169453236s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5: exit status 7 (717.532945ms)

                                                
                                                
-- stdout --
	ha-284200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-284200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-284200-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-284200-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:38:07.440029  455488 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:38:07.440336  455488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:38:07.440348  455488 out.go:374] Setting ErrFile to fd 2...
	I1217 19:38:07.440353  455488 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:38:07.440610  455488 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:38:07.440851  455488 out.go:368] Setting JSON to false
	I1217 19:38:07.440893  455488 mustload.go:66] Loading cluster: ha-284200
	I1217 19:38:07.441055  455488 notify.go:221] Checking for updates...
	I1217 19:38:07.441367  455488 config.go:182] Loaded profile config "ha-284200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:38:07.441386  455488 status.go:174] checking status of ha-284200 ...
	I1217 19:38:07.441846  455488 cli_runner.go:164] Run: docker container inspect ha-284200 --format={{.State.Status}}
	I1217 19:38:07.461265  455488 status.go:371] ha-284200 host status = "Running" (err=<nil>)
	I1217 19:38:07.461294  455488 host.go:66] Checking if "ha-284200" exists ...
	I1217 19:38:07.461637  455488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-284200
	I1217 19:38:07.482172  455488 host.go:66] Checking if "ha-284200" exists ...
	I1217 19:38:07.482583  455488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:38:07.482671  455488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-284200
	I1217 19:38:07.502446  455488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/ha-284200/id_rsa Username:docker}
	I1217 19:38:07.601683  455488 ssh_runner.go:195] Run: systemctl --version
	I1217 19:38:07.607973  455488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:38:07.620367  455488 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:38:07.676157  455488 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 19:38:07.665825708 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:38:07.677002  455488 kubeconfig.go:125] found "ha-284200" server: "https://192.168.49.254:8443"
	I1217 19:38:07.677039  455488 api_server.go:166] Checking apiserver status ...
	I1217 19:38:07.677114  455488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:38:07.690007  455488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1217 19:38:07.698586  455488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:38:07.698650  455488 ssh_runner.go:195] Run: ls
	I1217 19:38:07.702499  455488 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 19:38:07.708790  455488 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 19:38:07.708815  455488 status.go:463] ha-284200 apiserver status = Running (err=<nil>)
	I1217 19:38:07.708826  455488 status.go:176] ha-284200 status: &{Name:ha-284200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:38:07.708842  455488 status.go:174] checking status of ha-284200-m02 ...
	I1217 19:38:07.709136  455488 cli_runner.go:164] Run: docker container inspect ha-284200-m02 --format={{.State.Status}}
	I1217 19:38:07.727496  455488 status.go:371] ha-284200-m02 host status = "Stopped" (err=<nil>)
	I1217 19:38:07.727520  455488 status.go:384] host is not running, skipping remaining checks
	I1217 19:38:07.727528  455488 status.go:176] ha-284200-m02 status: &{Name:ha-284200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:38:07.727555  455488 status.go:174] checking status of ha-284200-m03 ...
	I1217 19:38:07.727916  455488 cli_runner.go:164] Run: docker container inspect ha-284200-m03 --format={{.State.Status}}
	I1217 19:38:07.746276  455488 status.go:371] ha-284200-m03 host status = "Running" (err=<nil>)
	I1217 19:38:07.746303  455488 host.go:66] Checking if "ha-284200-m03" exists ...
	I1217 19:38:07.746571  455488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-284200-m03
	I1217 19:38:07.764560  455488 host.go:66] Checking if "ha-284200-m03" exists ...
	I1217 19:38:07.764850  455488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:38:07.764896  455488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-284200-m03
	I1217 19:38:07.784483  455488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/ha-284200-m03/id_rsa Username:docker}
	I1217 19:38:07.884767  455488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:38:07.897582  455488 kubeconfig.go:125] found "ha-284200" server: "https://192.168.49.254:8443"
	I1217 19:38:07.897612  455488 api_server.go:166] Checking apiserver status ...
	I1217 19:38:07.897650  455488 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:38:07.909446  455488 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup
	W1217 19:38:07.919161  455488 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1187/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:38:07.919226  455488 ssh_runner.go:195] Run: ls
	I1217 19:38:07.923748  455488 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1217 19:38:07.928029  455488 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1217 19:38:07.928054  455488 status.go:463] ha-284200-m03 apiserver status = Running (err=<nil>)
	I1217 19:38:07.928062  455488 status.go:176] ha-284200-m03 status: &{Name:ha-284200-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:38:07.928095  455488 status.go:174] checking status of ha-284200-m04 ...
	I1217 19:38:07.928358  455488 cli_runner.go:164] Run: docker container inspect ha-284200-m04 --format={{.State.Status}}
	I1217 19:38:07.946926  455488 status.go:371] ha-284200-m04 host status = "Running" (err=<nil>)
	I1217 19:38:07.946955  455488 host.go:66] Checking if "ha-284200-m04" exists ...
	I1217 19:38:07.947250  455488 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-284200-m04
	I1217 19:38:07.964975  455488 host.go:66] Checking if "ha-284200-m04" exists ...
	I1217 19:38:07.965265  455488 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:38:07.965309  455488 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-284200-m04
	I1217 19:38:07.983140  455488 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33178 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/ha-284200-m04/id_rsa Username:docker}
	I1217 19:38:08.082536  455488 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:38:08.095225  455488 status.go:176] ha-284200-m04 status: &{Name:ha-284200-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (18.89s)
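Stopping one control-plane node leaves the cluster serving but degrades its status, which is why the status call above exits with code 7 rather than 0. By hand:

  minikube -p ha-284200 node stop m02
  minikube -p ha-284200 status; echo $?   # prints the per-node table and a non-zero code (7 here) while m02 is stopped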

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (14.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 node start m02 --alsologtostderr -v 5: (13.754660329s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.93s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 stop --alsologtostderr -v 5
E1217 19:38:25.546306  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 stop --alsologtostderr -v 5: (48.312414174s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 start --wait true --alsologtostderr -v 5
E1217 19:39:25.510732  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:25.517333  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:25.528787  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:25.550289  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:25.591958  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:25.673933  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:25.835458  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:26.157552  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:26.799901  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:28.081722  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:30.643773  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:35.765346  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:46.006909  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:39:47.468130  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:06.488982  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 start --wait true --alsologtostderr -v 5: (1m9.329635459s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.78s)
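The point of RestartClusterKeepsNodes is that a full stop/start cycle preserves the node list. A condensed sketch:

  minikube -p ha-284200 node list          # record the nodes before the restart
  minikube -p ha-284200 stop
  minikube -p ha-284200 start --wait true
  minikube -p ha-284200 node list          # should match the pre-stop list, worker included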

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 node delete m03 --alsologtostderr -v 5: (9.789439034s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)
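Removing a control-plane member and confirming the remaining nodes settle back to Ready reduces to:

  minikube -p ha-284200 node delete m03
  minikube -p ha-284200 status
  kubectl --context ha-284200 get nodes    # the deleted member should no longer be listed and the rest should be Ready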

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (43.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 stop --alsologtostderr -v 5
E1217 19:40:47.450951  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:40:57.993291  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 stop --alsologtostderr -v 5: (43.557312601s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5: exit status 7 (125.219231ms)

                                                
                                                
-- stdout --
	ha-284200
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-284200-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-284200-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:41:17.260396  469598 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:41:17.260696  469598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:41:17.260707  469598 out.go:374] Setting ErrFile to fd 2...
	I1217 19:41:17.260712  469598 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:41:17.260928  469598 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:41:17.261136  469598 out.go:368] Setting JSON to false
	I1217 19:41:17.261175  469598 mustload.go:66] Loading cluster: ha-284200
	I1217 19:41:17.261299  469598 notify.go:221] Checking for updates...
	I1217 19:41:17.261705  469598 config.go:182] Loaded profile config "ha-284200": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:41:17.261728  469598 status.go:174] checking status of ha-284200 ...
	I1217 19:41:17.262361  469598 cli_runner.go:164] Run: docker container inspect ha-284200 --format={{.State.Status}}
	I1217 19:41:17.283425  469598 status.go:371] ha-284200 host status = "Stopped" (err=<nil>)
	I1217 19:41:17.283462  469598 status.go:384] host is not running, skipping remaining checks
	I1217 19:41:17.283472  469598 status.go:176] ha-284200 status: &{Name:ha-284200 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:41:17.283512  469598 status.go:174] checking status of ha-284200-m02 ...
	I1217 19:41:17.283819  469598 cli_runner.go:164] Run: docker container inspect ha-284200-m02 --format={{.State.Status}}
	I1217 19:41:17.303327  469598 status.go:371] ha-284200-m02 host status = "Stopped" (err=<nil>)
	I1217 19:41:17.303348  469598 status.go:384] host is not running, skipping remaining checks
	I1217 19:41:17.303356  469598 status.go:176] ha-284200-m02 status: &{Name:ha-284200-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:41:17.303381  469598 status.go:174] checking status of ha-284200-m04 ...
	I1217 19:41:17.303622  469598 cli_runner.go:164] Run: docker container inspect ha-284200-m04 --format={{.State.Status}}
	I1217 19:41:17.321433  469598 status.go:371] ha-284200-m04 host status = "Stopped" (err=<nil>)
	I1217 19:41:17.321460  469598 status.go:384] host is not running, skipping remaining checks
	I1217 19:41:17.321467  469598 status.go:176] ha-284200-m04 status: &{Name:ha-284200-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (43.68s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (56.99s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1217 19:42:03.606285  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 19:42:09.372580  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (56.166252166s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (56.99s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (39.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 node add --control-plane --alsologtostderr -v 5
E1217 19:42:31.312663  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-284200 node add --control-plane --alsologtostderr -v 5: (38.307558094s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-284200 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (39.27s)
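This is the counterpart to AddWorkerNode earlier: the same node add command with --control-plane joins another control-plane member to the HA cluster.

  minikube -p ha-284200 node add --control-plane
  minikube -p ha-284200 status             # the new node should appear as an additional Control Plane entry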

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.94s)

                                                
                                    
x
+
TestJSONOutput/start/Command (41.48s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-958146 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-958146 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (41.480418823s)
--- PASS: TestJSONOutput/start/Command (41.48s)
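With --output=json, minikube reports each progress event as a line of JSON, which is what the Audit and step-ordering subtests below parse. A quick way to eyeball the stream by hand (jq is used here only for pretty-printing and is not part of the test):

  minikube start -p json-output-958146 --output=json --user=testUser --memory=3072 --wait=true \
    --driver=docker --container-runtime=crio | jq .

Per their names, the DistinctCurrentSteps and IncreasingCurrentSteps checks assert that the step numbers in those events do not repeat and only move forward.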

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.02s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-958146 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-958146 --output=json --user=testUser: (8.019968923s)
--- PASS: TestJSONOutput/stop/Command (8.02s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

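For context, the DistinctCurrentSteps and IncreasingCurrentSteps subtests above validate the "currentstep" field of the CloudEvents that --output=json emits (sample step events appear in the TestErrorJSONOutput output below). The following is a minimal, hypothetical sketch of that kind of check over the captured JSON lines; it is not the code in json_output_test.go.

// stepcheck.go - hypothetical sketch, not the real json_output_test.go logic.
// It reads minikube --output=json lines from stdin and flags any repeated or
// regressing "currentstep" value among the step events.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"strconv"
)

type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	seen := map[int]bool{}
	last := -1
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil || ev.Type != "io.k8s.sigs.minikube.step" {
			continue // skip non-JSON lines and non-step events
		}
		step, err := strconv.Atoi(ev.Data["currentstep"])
		if err != nil {
			continue
		}
		if seen[step] {
			fmt.Printf("duplicate currentstep %d\n", step) // what DistinctCurrentSteps guards against
		}
		if step < last {
			fmt.Printf("currentstep went backwards: %d after %d\n", step, last) // IncreasingCurrentSteps
		}
		seen[step] = true
		last = step
	}
}

Piping the stdout of the start/pause/unpause/stop commands above through a checker like this would surface any out-of-order step numbering.
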
TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-517465 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-517465 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (81.022408ms)

-- stdout --
	{"specversion":"1.0","id":"89278f3d-1475-44c4-8218-bde3df36ac95","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-517465] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f3929171-6589-4b2d-bc9b-1c53f9d8b63a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22186"}}
	{"specversion":"1.0","id":"9c404ef7-2701-4cd4-b303-3b8c914adeef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"aa26daec-614a-47e8-96ef-9164c4d449d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig"}}
	{"specversion":"1.0","id":"ff799a33-f01b-4672-9e42-bf18933a41d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube"}}
	{"specversion":"1.0","id":"0a726a55-3312-4ee2-b866-1d12694b0383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f9bdef09-f717-4d5d-8bbb-3de86e551200","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"acdf947b-bdea-4fb3-8ed8-dede6b6cf435","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-517465" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-517465
--- PASS: TestErrorJSONOutput (0.24s)
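
The failure path above ends with an io.k8s.sigs.minikube.error event whose data block carries the exit code. A short sketch of pulling that out, assuming only the field names visible in the sample line above (the embedded line is a trimmed copy of it):

// errorevent.go - hypothetical sketch based on the DRV_UNSUPPORTED_OS event above.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Trimmed copy of the last stdout line from TestErrorJSONOutput above.
	line := []byte(`{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`)

	var ev struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}
	if err := json.Unmarshal(line, &ev); err == nil && ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Println(ev.Data["name"], "exit code", ev.Data["exitcode"]) // DRV_UNSUPPORTED_OS exit code 56
	}
}

The extracted exit code matches the process exit status 56 reported by the Non-zero exit line above.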

TestKicCustomNetwork/create_custom_network (30.91s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-260404 --network=
E1217 19:44:25.511324  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-260404 --network=: (28.737635629s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-260404" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-260404
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-260404: (2.155300066s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.91s)

TestKicCustomNetwork/use_default_bridge_network (22.79s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-899410 --network=bridge
E1217 19:44:53.218622  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-899410 --network=bridge: (20.758563098s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-899410" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-899410
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-899410: (2.014047646s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (22.79s)

TestKicExistingNetwork (23.61s)

=== RUN   TestKicExistingNetwork
I1217 19:44:56.093061  375797 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 19:44:56.110692  375797 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 19:44:56.110770  375797 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 19:44:56.110789  375797 cli_runner.go:164] Run: docker network inspect existing-network
W1217 19:44:56.128885  375797 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 19:44:56.128919  375797 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1217 19:44:56.128941  375797 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1217 19:44:56.129057  375797 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 19:44:56.146837  375797 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f64340259533 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f6:0a:32:70:0d:35} reservation:<nil>}
I1217 19:44:56.147228  375797 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e2af30}
I1217 19:44:56.147263  375797 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 19:44:56.147305  375797 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 19:44:56.194328  375797 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-361599 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-361599 --network=existing-network: (21.427473364s)
helpers_test.go:176: Cleaning up "existing-network-361599" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-361599
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-361599: (2.043259217s)
I1217 19:45:19.683169  375797 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (23.61s)
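
The network_create lines above show how the pre-created network gets its address range: 192.168.49.0/24 is already taken by another profile's bridge, so the next free candidate, 192.168.58.0/24, is passed to docker network create. Below is a simplified illustration of that selection; the candidate spacing is inferred from the 49 -> 58 jump in this log and is an assumption of the sketch, not minikube's actual network package.

// subnetpick.go - simplified sketch of picking a free /24 for a KIC network,
// modeled loosely on the network.go lines above. Candidate list and step size
// are assumptions for illustration only.
package main

import "fmt"

func firstFreeSubnet(taken map[string]bool) string {
	// Walk 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, ... and return
	// the first range not already in use.
	for third := 49; third <= 247; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[cidr] {
			return cidr
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{"192.168.49.0/24": true} // occupied by br-f64340259533 above
	fmt.Println(firstFreeSubnet(taken))               // prints 192.168.58.0/24
	// The chosen range then feeds the command shown in the log:
	//   docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 ...
}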

TestKicCustomSubnet (23.74s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-066926 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-066926 --subnet=192.168.60.0/24: (21.524646112s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-066926 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-066926" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-066926
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-066926: (2.200528361s)
--- PASS: TestKicCustomSubnet (23.74s)

TestKicStaticIP (25.9s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-191126 --static-ip=192.168.200.200
E1217 19:45:57.993964  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-191126 --static-ip=192.168.200.200: (23.569788067s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-191126 ip
helpers_test.go:176: Cleaning up "static-ip-191126" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-191126
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-191126: (2.173069836s)
--- PASS: TestKicStaticIP (25.90s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.06s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-448609 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-448609 --driver=docker  --container-runtime=crio: (21.545916722s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-450825 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-450825 --driver=docker  --container-runtime=crio: (20.411513207s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-448609
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-450825
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-450825" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-450825
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p second-450825: (2.426063354s)
helpers_test.go:176: Cleaning up "first-448609" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-448609
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p first-448609: (2.385615619s)
--- PASS: TestMinikubeProfile (48.06s)

TestMountStart/serial/StartWithMountFirst (7.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-516360 --memory=3072 --mount-string /tmp/TestMountStartserial3269988864/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E1217 19:47:03.610128  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-516360 --memory=3072 --mount-string /tmp/TestMountStartserial3269988864/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.890472323s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.89s)

TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-516360 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

TestMountStart/serial/StartWithMountSecond (4.89s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-535372 --memory=3072 --mount-string /tmp/TestMountStartserial3269988864/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-535372 --memory=3072 --mount-string /tmp/TestMountStartserial3269988864/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.890460731s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.89s)

TestMountStart/serial/VerifyMountSecond (0.29s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-535372 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.29s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-516360 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-516360 --alsologtostderr -v=5: (1.689689299s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-535372 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.26s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-535372
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-535372: (1.263262599s)
--- PASS: TestMountStart/serial/Stop (1.26s)

TestMountStart/serial/RestartStopped (7.56s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-535372
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-535372: (6.561947009s)
E1217 19:47:21.061948  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/RestartStopped (7.56s)

TestMountStart/serial/VerifyMountPostStop (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-535372 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

TestMultiNode/serial/FreshStart2Nodes (66.23s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045161 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045161 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.725718667s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (66.23s)

TestMultiNode/serial/DeployApp2Nodes (4.42s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-045161 -- rollout status deployment/busybox: (2.549409432s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-7k497 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-zpx7g -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-7k497 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-zpx7g -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-7k497 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-zpx7g -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.42s)

TestMultiNode/serial/PingHostFrom2Pods (0.77s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-7k497 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-7k497 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-zpx7g -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-045161 -- exec busybox-7b57f96db7-zpx7g -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.77s)

TestMultiNode/serial/AddNode (24.95s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-045161 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-045161 -v=5 --alsologtostderr: (24.272461083s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.95s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-045161 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.39s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp testdata/cp-test.txt multinode-045161:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile360835547/001/cp-test_multinode-045161.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161:/home/docker/cp-test.txt multinode-045161-m02:/home/docker/cp-test_multinode-045161_multinode-045161-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m02 "sudo cat /home/docker/cp-test_multinode-045161_multinode-045161-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161:/home/docker/cp-test.txt multinode-045161-m03:/home/docker/cp-test_multinode-045161_multinode-045161-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m03 "sudo cat /home/docker/cp-test_multinode-045161_multinode-045161-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp testdata/cp-test.txt multinode-045161-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile360835547/001/cp-test_multinode-045161-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161-m02:/home/docker/cp-test.txt multinode-045161:/home/docker/cp-test_multinode-045161-m02_multinode-045161.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161 "sudo cat /home/docker/cp-test_multinode-045161-m02_multinode-045161.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161-m02:/home/docker/cp-test.txt multinode-045161-m03:/home/docker/cp-test_multinode-045161-m02_multinode-045161-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m03 "sudo cat /home/docker/cp-test_multinode-045161-m02_multinode-045161-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp testdata/cp-test.txt multinode-045161-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile360835547/001/cp-test_multinode-045161-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161-m03:/home/docker/cp-test.txt multinode-045161:/home/docker/cp-test_multinode-045161-m03_multinode-045161.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161 "sudo cat /home/docker/cp-test_multinode-045161-m03_multinode-045161.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 cp multinode-045161-m03:/home/docker/cp-test.txt multinode-045161-m02:/home/docker/cp-test_multinode-045161-m03_multinode-045161-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 ssh -n multinode-045161-m02 "sudo cat /home/docker/cp-test_multinode-045161-m03_multinode-045161-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.39s)

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-045161 node stop m03: (1.287924916s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045161 status: exit status 7 (517.087486ms)

-- stdout --
	multinode-045161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-045161-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-045161-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr: exit status 7 (516.562374ms)

-- stdout --
	multinode-045161
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-045161-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-045161-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 19:49:13.188396  529719 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:49:13.188701  529719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:49:13.188712  529719 out.go:374] Setting ErrFile to fd 2...
	I1217 19:49:13.188717  529719 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:49:13.188950  529719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:49:13.189201  529719 out.go:368] Setting JSON to false
	I1217 19:49:13.189246  529719 mustload.go:66] Loading cluster: multinode-045161
	I1217 19:49:13.189368  529719 notify.go:221] Checking for updates...
	I1217 19:49:13.189705  529719 config.go:182] Loaded profile config "multinode-045161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:49:13.189720  529719 status.go:174] checking status of multinode-045161 ...
	I1217 19:49:13.190278  529719 cli_runner.go:164] Run: docker container inspect multinode-045161 --format={{.State.Status}}
	I1217 19:49:13.210735  529719 status.go:371] multinode-045161 host status = "Running" (err=<nil>)
	I1217 19:49:13.210769  529719 host.go:66] Checking if "multinode-045161" exists ...
	I1217 19:49:13.211037  529719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-045161
	I1217 19:49:13.229864  529719 host.go:66] Checking if "multinode-045161" exists ...
	I1217 19:49:13.230136  529719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:49:13.230189  529719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-045161
	I1217 19:49:13.248223  529719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/multinode-045161/id_rsa Username:docker}
	I1217 19:49:13.347597  529719 ssh_runner.go:195] Run: systemctl --version
	I1217 19:49:13.353948  529719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:49:13.366552  529719 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:49:13.423165  529719 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:64 SystemTime:2025-12-17 19:49:13.412863497 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:49:13.423715  529719 kubeconfig.go:125] found "multinode-045161" server: "https://192.168.67.2:8443"
	I1217 19:49:13.423747  529719 api_server.go:166] Checking apiserver status ...
	I1217 19:49:13.423791  529719 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 19:49:13.436469  529719 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1243/cgroup
	W1217 19:49:13.445866  529719 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1243/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1217 19:49:13.445926  529719 ssh_runner.go:195] Run: ls
	I1217 19:49:13.449915  529719 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1217 19:49:13.454150  529719 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1217 19:49:13.454192  529719 status.go:463] multinode-045161 apiserver status = Running (err=<nil>)
	I1217 19:49:13.454206  529719 status.go:176] multinode-045161 status: &{Name:multinode-045161 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:49:13.454223  529719 status.go:174] checking status of multinode-045161-m02 ...
	I1217 19:49:13.454506  529719 cli_runner.go:164] Run: docker container inspect multinode-045161-m02 --format={{.State.Status}}
	I1217 19:49:13.473798  529719 status.go:371] multinode-045161-m02 host status = "Running" (err=<nil>)
	I1217 19:49:13.473823  529719 host.go:66] Checking if "multinode-045161-m02" exists ...
	I1217 19:49:13.474116  529719 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-045161-m02
	I1217 19:49:13.491542  529719 host.go:66] Checking if "multinode-045161-m02" exists ...
	I1217 19:49:13.491831  529719 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 19:49:13.491881  529719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-045161-m02
	I1217 19:49:13.510403  529719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33288 SSHKeyPath:/home/jenkins/minikube-integration/22186-372245/.minikube/machines/multinode-045161-m02/id_rsa Username:docker}
	I1217 19:49:13.609687  529719 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 19:49:13.622242  529719 status.go:176] multinode-045161-m02 status: &{Name:multinode-045161-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:49:13.622274  529719 status.go:174] checking status of multinode-045161-m03 ...
	I1217 19:49:13.622525  529719 cli_runner.go:164] Run: docker container inspect multinode-045161-m03 --format={{.State.Status}}
	I1217 19:49:13.640558  529719 status.go:371] multinode-045161-m03 host status = "Stopped" (err=<nil>)
	I1217 19:49:13.640583  529719 status.go:384] host is not running, skipping remaining checks
	I1217 19:49:13.640589  529719 status.go:176] multinode-045161-m03 status: &{Name:multinode-045161-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
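
Note that minikube status signals the stopped node through its exit code as well as the table above: with m03 down the command exits non-zero (status 7 in this run). A hypothetical caller could key off that exit code; the binary path and profile name below are simply the ones used in this log, and no meaning is assumed for the specific value 7 beyond what the run shows.

// statuscheck.go - hypothetical wrapper around "minikube status".
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-045161", "status")
	out, err := cmd.Output()
	fmt.Print(string(out)) // same table as the -- stdout -- block above

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit (7 in the run above) indicates at least one node is not fully running.
		fmt.Println("status exit code:", exitErr.ExitCode())
	}
}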

TestMultiNode/serial/StartAfterStop (7.29s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-045161 node start m03 -v=5 --alsologtostderr: (6.562467623s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.29s)

TestMultiNode/serial/RestartKeepsNodes (78.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-045161
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-045161
E1217 19:49:25.511129  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-045161: (31.387919353s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045161 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045161 --wait=true -v=5 --alsologtostderr: (47.204741833s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-045161
--- PASS: TestMultiNode/serial/RestartKeepsNodes (78.72s)

TestMultiNode/serial/DeleteNode (5.32s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-045161 node delete m03: (4.70151876s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.32s)

TestMultiNode/serial/StopMultiNode (30.41s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 stop
E1217 19:50:57.992348  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-045161 stop: (30.209797937s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045161 status: exit status 7 (103.020103ms)

-- stdout --
	multinode-045161
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-045161-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr: exit status 7 (101.01082ms)

-- stdout --
	multinode-045161
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-045161-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 19:51:15.350641  539549 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:51:15.350923  539549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:51:15.350933  539549 out.go:374] Setting ErrFile to fd 2...
	I1217 19:51:15.350937  539549 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:51:15.351211  539549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:51:15.351438  539549 out.go:368] Setting JSON to false
	I1217 19:51:15.351482  539549 mustload.go:66] Loading cluster: multinode-045161
	I1217 19:51:15.351552  539549 notify.go:221] Checking for updates...
	I1217 19:51:15.351932  539549 config.go:182] Loaded profile config "multinode-045161": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:51:15.351950  539549 status.go:174] checking status of multinode-045161 ...
	I1217 19:51:15.352463  539549 cli_runner.go:164] Run: docker container inspect multinode-045161 --format={{.State.Status}}
	I1217 19:51:15.371447  539549 status.go:371] multinode-045161 host status = "Stopped" (err=<nil>)
	I1217 19:51:15.371480  539549 status.go:384] host is not running, skipping remaining checks
	I1217 19:51:15.371490  539549 status.go:176] multinode-045161 status: &{Name:multinode-045161 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 19:51:15.371520  539549 status.go:174] checking status of multinode-045161-m02 ...
	I1217 19:51:15.371826  539549 cli_runner.go:164] Run: docker container inspect multinode-045161-m02 --format={{.State.Status}}
	I1217 19:51:15.390034  539549 status.go:371] multinode-045161-m02 host status = "Stopped" (err=<nil>)
	I1217 19:51:15.390062  539549 status.go:384] host is not running, skipping remaining checks
	I1217 19:51:15.390071  539549 status.go:176] multinode-045161-m02 status: &{Name:multinode-045161-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.41s)

TestMultiNode/serial/RestartMultiNode (51.74s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045161 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1217 19:52:03.606403  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045161 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (51.11478615s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-045161 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (51.74s)

TestMultiNode/serial/ValidateNameConflict (25.82s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-045161
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045161-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-045161-m02 --driver=docker  --container-runtime=crio: exit status 14 (86.469416ms)

-- stdout --
	* [multinode-045161-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-045161-m02' is duplicated with machine name 'multinode-045161-m02' in profile 'multinode-045161'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-045161-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-045161-m03 --driver=docker  --container-runtime=crio: (22.966765638s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-045161
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-045161: exit status 80 (309.130113ms)

-- stdout --
	* Adding node m03 to cluster multinode-045161 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-045161-m03 already exists in multinode-045161-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-045161-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-045161-m03: (2.393905481s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.82s)

                                                
                                    
TestPreload (101.83s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817430 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817430 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio: (45.11391732s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817430 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-817430 image pull gcr.io/k8s-minikube/busybox: (1.434927187s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-817430
E1217 19:53:26.674312  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-817430: (6.204194146s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-817430 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-817430 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (46.421710534s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-817430 image list
helpers_test.go:176: Cleaning up "test-preload-817430" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-817430
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-817430: (2.407408586s)
--- PASS: TestPreload (101.83s)

                                                
                                    
TestScheduledStopUnix (98.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-197684 --memory=3072 --driver=docker  --container-runtime=crio
E1217 19:54:25.510680  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-197684 --memory=3072 --driver=docker  --container-runtime=crio: (23.172973848s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-197684 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 19:54:42.247761  556692 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:54:42.247892  556692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:54:42.247903  556692 out.go:374] Setting ErrFile to fd 2...
	I1217 19:54:42.247907  556692 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:54:42.248183  556692 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:54:42.248501  556692 out.go:368] Setting JSON to false
	I1217 19:54:42.248610  556692 mustload.go:66] Loading cluster: scheduled-stop-197684
	I1217 19:54:42.248944  556692 config.go:182] Loaded profile config "scheduled-stop-197684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:54:42.249037  556692 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/config.json ...
	I1217 19:54:42.249264  556692 mustload.go:66] Loading cluster: scheduled-stop-197684
	I1217 19:54:42.249380  556692 config.go:182] Loaded profile config "scheduled-stop-197684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-197684 -n scheduled-stop-197684
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 19:54:42.640173  556846 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:54:42.640481  556846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:54:42.640492  556846 out.go:374] Setting ErrFile to fd 2...
	I1217 19:54:42.640499  556846 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:54:42.640706  556846 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:54:42.640977  556846 out.go:368] Setting JSON to false
	I1217 19:54:42.641211  556846 daemonize_unix.go:73] killing process 556725 as it is an old scheduled stop
	I1217 19:54:42.641327  556846 mustload.go:66] Loading cluster: scheduled-stop-197684
	I1217 19:54:42.641770  556846 config.go:182] Loaded profile config "scheduled-stop-197684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:54:42.641861  556846 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/config.json ...
	I1217 19:54:42.642107  556846 mustload.go:66] Loading cluster: scheduled-stop-197684
	I1217 19:54:42.642244  556846 config.go:182] Loaded profile config "scheduled-stop-197684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1217 19:54:42.647136  375797 retry.go:31] will retry after 123.11µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.648312  375797 retry.go:31] will retry after 175.508µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.649457  375797 retry.go:31] will retry after 256.572µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.650601  375797 retry.go:31] will retry after 401.563µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.651772  375797 retry.go:31] will retry after 550.737µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.652913  375797 retry.go:31] will retry after 417.997µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.654092  375797 retry.go:31] will retry after 857.514µs: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.655234  375797 retry.go:31] will retry after 1.969206ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.657472  375797 retry.go:31] will retry after 3.584773ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.661671  375797 retry.go:31] will retry after 5.71734ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.667932  375797 retry.go:31] will retry after 3.354673ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.672210  375797 retry.go:31] will retry after 9.152873ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.682465  375797 retry.go:31] will retry after 10.74188ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.693737  375797 retry.go:31] will retry after 22.69148ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.717029  375797 retry.go:31] will retry after 34.371877ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
I1217 19:54:42.752331  375797 retry.go:31] will retry after 47.068716ms: open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-197684 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-197684 -n scheduled-stop-197684
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-197684
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-197684 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1217 19:55:08.592328  557542 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:55:08.592594  557542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:55:08.592604  557542 out.go:374] Setting ErrFile to fd 2...
	I1217 19:55:08.592611  557542 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:55:08.592838  557542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:55:08.593112  557542 out.go:368] Setting JSON to false
	I1217 19:55:08.593211  557542 mustload.go:66] Loading cluster: scheduled-stop-197684
	I1217 19:55:08.593518  557542 config.go:182] Loaded profile config "scheduled-stop-197684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:55:08.593602  557542 profile.go:143] Saving config to /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/scheduled-stop-197684/config.json ...
	I1217 19:55:08.593817  557542 mustload.go:66] Loading cluster: scheduled-stop-197684
	I1217 19:55:08.593943  557542 config.go:182] Loaded profile config "scheduled-stop-197684": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1217 19:55:48.580360  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-197684
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-197684: exit status 7 (82.686963ms)

                                                
                                                
-- stdout --
	scheduled-stop-197684
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-197684 -n scheduled-stop-197684
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-197684 -n scheduled-stop-197684: exit status 7 (82.555205ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-197684" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-197684
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-197684: (3.876839743s)
--- PASS: TestScheduledStopUnix (98.64s)

                                                
                                    
TestInsufficientStorage (9.08s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-455834 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
E1217 19:55:57.993225  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-455834 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.553265657s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"dd99790d-a9e3-4888-819b-f11c3e7de955","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-455834] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dca3ace2-514e-45a7-a898-2e8086f6e6fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22186"}}
	{"specversion":"1.0","id":"a29192a9-d7a7-48ba-8ae1-f368b19ce886","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7f97485c-6113-42b4-a542-df4cfe29915d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig"}}
	{"specversion":"1.0","id":"de404988-7010-4986-8b68-96be6f06efe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube"}}
	{"specversion":"1.0","id":"c2497e9e-7667-4fdb-8dd0-91a6a3fec761","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"1ae8b0a2-b2be-4efa-bb98-f9ef1a9fab03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2c203538-5af5-4da4-8688-550e75c7c103","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"cf676586-b7ce-4141-98e3-7bfe95cbc1d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"9e4401eb-b75b-40ea-aaea-a3a20f787af4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e959a93-957f-4a5b-a4a4-a24480b3f9a4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"0fa8aa59-82c7-4028-8fd5-fa6adcf1d179","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-455834\" primary control-plane node in \"insufficient-storage-455834\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"6475511c-70cc-4930-88b2-2dc9662bdfe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765966054-22186 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"cd6ea02e-f880-4ea5-aa9d-59cf073f90b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"32d78f17-ddfc-43f6-82f5-890ef0c40c18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-455834 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-455834 --output=json --layout=cluster: exit status 7 (304.776851ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-455834","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-455834","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 19:56:04.492957  560036 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-455834" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-455834 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-455834 --output=json --layout=cluster: exit status 7 (301.936289ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-455834","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-455834","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 19:56:04.796117  560148 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-455834" does not appear in /home/jenkins/minikube-integration/22186-372245/kubeconfig
	E1217 19:56:04.806786  560148 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/insufficient-storage-455834/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-455834" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-455834
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-455834: (1.92253474s)
--- PASS: TestInsufficientStorage (9.08s)

                                                
                                    
TestRunningBinaryUpgrade (51.91s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3674623696 start -p running-upgrade-827750 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3674623696 start -p running-upgrade-827750 --memory=3072 --vm-driver=docker  --container-runtime=crio: (25.121354031s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-827750 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-827750 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.971146842s)
helpers_test.go:176: Cleaning up "running-upgrade-827750" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-827750
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-827750: (3.083281326s)
--- PASS: TestRunningBinaryUpgrade (51.91s)

                                                
                                    
TestKubernetesUpgrade (297.79s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.736262783s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-322567
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-322567: (2.368096291s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-322567 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-322567 status --format={{.Host}}: exit status 7 (102.29067ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m22.663262362s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-322567 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (110.108298ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-322567] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-322567
	    minikube start -p kubernetes-upgrade-322567 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3225672 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-322567 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-322567 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.021892343s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-322567" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-322567
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-322567: (2.713175873s)
--- PASS: TestKubernetesUpgrade (297.79s)

                                                
                                    
TestMissingContainerUpgrade (66.04s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.35.0.3162340602 start -p missing-upgrade-910044 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.35.0.3162340602 start -p missing-upgrade-910044 --memory=3072 --driver=docker  --container-runtime=crio: (25.484612269s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-910044
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-910044
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-910044 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-910044 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.707011905s)
helpers_test.go:176: Cleaning up "missing-upgrade-910044" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-910044
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-910044: (2.419123696s)
--- PASS: TestMissingContainerUpgrade (66.04s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestPause/serial/Start (60.81s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-318455 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-318455 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (1m0.806968292s)
--- PASS: TestPause/serial/Start (60.81s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (62.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.700500945 start -p stopped-upgrade-321305 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.700500945 start -p stopped-upgrade-321305 --memory=3072 --vm-driver=docker  --container-runtime=crio: (45.677277787s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.700500945 -p stopped-upgrade-321305 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.700500945 -p stopped-upgrade-321305 stop: (2.045767613s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-321305 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-321305 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (15.184234254s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.91s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.93s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-318455 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-318455 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.907380215s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.93s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-321305
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-321305: (1.288591893s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.29s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327438 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-327438 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (111.104464ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-327438] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (24.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327438 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.984465438s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-327438 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (24.32s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (11.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (9.486881243s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-327438 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-327438 status -o json: exit status 2 (382.615457ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-327438","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-327438
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-327438: (2.112200746s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (11.98s)

                                                
                                    
TestNoKubernetes/serial/Start (7.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327438 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.149758676s)
--- PASS: TestNoKubernetes/serial/Start (7.15s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22186-372245/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-327438 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-327438 "sudo systemctl is-active --quiet service kubelet": exit status 1 (323.917752ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (31.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (16.111918116s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (15.51130103s)
--- PASS: TestNoKubernetes/serial/ProfileList (31.62s)

                                                
                                    
TestNetworkPlugins/group/false (3.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-601560 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-601560 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (170.004555ms)

                                                
                                                
-- stdout --
	* [false-601560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22186
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 19:58:33.979745  605862 out.go:360] Setting OutFile to fd 1 ...
	I1217 19:58:33.979896  605862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:58:33.979909  605862 out.go:374] Setting ErrFile to fd 2...
	I1217 19:58:33.979915  605862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 19:58:33.980147  605862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22186-372245/.minikube/bin
	I1217 19:58:33.980682  605862 out.go:368] Setting JSON to false
	I1217 19:58:33.981959  605862 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6065,"bootTime":1765995449,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1217 19:58:33.982031  605862 start.go:143] virtualization: kvm guest
	I1217 19:58:33.984051  605862 out.go:179] * [false-601560] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1217 19:58:33.985391  605862 out.go:179]   - MINIKUBE_LOCATION=22186
	I1217 19:58:33.985432  605862 notify.go:221] Checking for updates...
	I1217 19:58:33.987688  605862 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 19:58:33.989036  605862 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22186-372245/kubeconfig
	I1217 19:58:33.990295  605862 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22186-372245/.minikube
	I1217 19:58:33.991678  605862 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1217 19:58:33.993054  605862 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 19:58:33.995043  605862 config.go:182] Loaded profile config "NoKubernetes-327438": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1217 19:58:33.995185  605862 config.go:182] Loaded profile config "cert-expiration-059470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
	I1217 19:58:33.995350  605862 config.go:182] Loaded profile config "kubernetes-upgrade-322567": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.35.0-rc.1
	I1217 19:58:33.995506  605862 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 19:58:34.020693  605862 docker.go:124] docker version: linux-29.1.3:Docker Engine - Community
	I1217 19:58:34.020779  605862 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 19:58:34.078401  605862 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:74 SystemTime:2025-12-17 19:58:34.067439857 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1045-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:29.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:1c4457e00facac03ce1d75f7b6777a7a851e5c41 Expected:} RuncCommit:{ID:v1.3.4-0-gd6d73eb8 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v5.0.0] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.6] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1217 19:58:34.078509  605862 docker.go:319] overlay module found
	I1217 19:58:34.080311  605862 out.go:179] * Using the docker driver based on user configuration
	I1217 19:58:34.081663  605862 start.go:309] selected driver: docker
	I1217 19:58:34.081678  605862 start.go:927] validating driver "docker" against <nil>
	I1217 19:58:34.081691  605862 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 19:58:34.083592  605862 out.go:203] 
	W1217 19:58:34.084814  605862 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1217 19:58:34.086062  605862 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-601560 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-601560" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:57:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-059470
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:58:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-322567
contexts:
- context:
    cluster: cert-expiration-059470
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:57:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-059470
  name: cert-expiration-059470
- context:
    cluster: kubernetes-upgrade-322567
    user: kubernetes-upgrade-322567
  name: kubernetes-upgrade-322567
current-context: ""
kind: Config
users:
- name: cert-expiration-059470
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.crt
    client-key: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.key
- name: kubernetes-upgrade-322567
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.crt
    client-key: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key
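Note: the kubeconfig above defines only the cert-expiration-059470 and kubernetes-upgrade-322567 contexts and has an empty current-context, which is why every kubectl-based collector in this debugLogs dump fails with "context was not found for specified context: false-601560". As a hypothetical illustration (not part of the test output), any query of the same shape would fail against this config:

  kubectl --context false-601560 get pods -A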

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-601560

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-601560"

                                                
                                                
----------------------- debugLogs end: false-601560 [took: 3.319553285s] --------------------------------
helpers_test.go:176: Cleaning up "false-601560" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-601560
--- PASS: TestNetworkPlugins/group/false (3.67s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-327438
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-327438: (1.30311217s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (6.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-327438 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-327438 --driver=docker  --container-runtime=crio: (6.735731644s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-327438 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-327438 "sudo systemctl is-active --quiet service kubelet": exit status 1 (314.026419ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)
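Note: for this check a non-zero exit is the expected (passing) result, since the profile was started without Kubernetes. The "ssh: Process exited with status 3" in stderr is the exit code of the systemctl invocation on the node; for systemctl is-active, 3 conventionally indicates an inactive unit. A hypothetical local equivalent (not part of the test output):

  systemctl is-active kubelet; echo "exit: $?"   # typically prints "inactive" and "exit: 3" when kubelet is not running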

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (52.61s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.612115055s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.61s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (46.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
E1217 19:59:25.509861  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-431355/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (46.163525201s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (46.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-832842 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [71149176-ff99-466f-92c1-b41eec28d488] Pending
helpers_test.go:353: "busybox" [71149176-ff99-466f-92c1-b41eec28d488] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [71149176-ff99-466f-92c1-b41eec28d488] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003672186s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-832842 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-894575 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [333c14fc-c646-4706-87c1-b6301f91b20a] Pending
helpers_test.go:353: "busybox" [333c14fc-c646-4706-87c1-b6301f91b20a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [333c14fc-c646-4706-87c1-b6301f91b20a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004032929s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-894575 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.23s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-832842 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-832842 --alsologtostderr -v=3: (16.285286864s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-894575 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-894575 --alsologtostderr -v=3: (15.983708161s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842: exit status 7 (87.662623ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-832842 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (49.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-832842 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (48.716195742s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-832842 -n no-preload-832842
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (49.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575: exit status 7 (87.60596ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-894575 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (47.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-894575 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (46.842071895s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-894575 -n old-k8s-version-894575
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.31s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
E1217 20:00:57.992413  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (44.126422811s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cfd69" [0bf2934b-4ecb-47b5-8b1a-98f9273e5bee] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003331337s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jb6px" [fc72ff5c-fb85-4431-a4f5-88e4e1f04888] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003106632s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-b84665fb8-cfd69" [0bf2934b-4ecb-47b5-8b1a-98f9273e5bee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004257673s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-832842 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-jb6px" [fc72ff5c-fb85-4431-a4f5-88e4e1f04888] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004167012s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-894575 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-832842 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-894575 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [3f23f224-9b23-48f4-a957-ebc839304940] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [3f23f224-9b23-48f4-a957-ebc839304940] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.004319456s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.31s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (24.1s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (24.100348783s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (24.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (42.9s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (42.898834261s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (42.90s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (18.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-759234 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-759234 --alsologtostderr -v=3: (18.397827485s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234: exit status 7 (116.570569ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-759234 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-759234 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (46.82902216s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-759234 -n default-k8s-diff-port-759234
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (47.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (18.64s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-420762 --alsologtostderr -v=3
E1217 20:02:03.605896  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/functional-676725/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-420762 --alsologtostderr -v=3: (18.642470283s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.64s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-147021 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [b9b3f47b-58e5-41d0-a3ca-8afa30e0116e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [b9b3f47b-58e5-41d0-a3ca-8afa30e0116e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 7.006647868s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-147021 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (7.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762: exit status 7 (93.067816ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-420762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (11.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-420762 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.35.0-rc.1: (10.944842206s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-420762 -n newest-cni-420762
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.33s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (17.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-147021 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-147021 --alsologtostderr -v=3: (17.350191424s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (17.35s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-420762 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (41.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.596436484s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.60s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (48.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (48.557446343s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.56s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-7lcjb" [af603d07-d31a-4272-8913-ab246e1ca095] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00423138s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021: exit status 7 (97.516009ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-147021 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (44.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-147021 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.3: (43.909732128s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-147021 -n embed-certs-147021
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (44.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-7lcjb" [af603d07-d31a-4272-8913-ab246e1ca095] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004859505s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-759234 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-759234 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (50.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (50.7380776s)
--- PASS: TestNetworkPlugins/group/calico/Start (50.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-601560 "pgrep -a kubelet"
I1217 20:03:15.683250  375797 config.go:182] Loaded profile config "auto-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-601560 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-tj928" [ec3e6dc2-ae55-43d9-a08d-bb7d2979a1c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-tj928" [ec3e6dc2-ae55-43d9-a08d-bb7d2979a1c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00467586s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-601560 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
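
Localhost and HairPin probe two different paths from the same pod: the first connects to port 8080 over loopback inside the pod's own network namespace, while the second connects to the netcat Service name, which resolves back to the pod itself and therefore only succeeds when hairpin NAT (or the CNI's equivalent) is working. The two probes, as run by the tests above:

    # Reach the listener directly over loopback inside the pod.
    kubectl --context auto-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Reach the same listener back through its own Service name (the "hairpin" path).
    kubectl --context auto-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"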

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-mfmbc" [ceb0146a-e10e-4b22-a499-9bf6b194b9ec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004350943s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-27rqf" [0513181f-349f-406d-bee0-2833c0e27ccb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004212349s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-601560 "pgrep -a kubelet"
I1217 20:03:33.355052  375797 config.go:182] Loaded profile config "kindnet-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-601560 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zgb2s" [57825acb-2464-4983-becd-072c73f6e3e4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zgb2s" [57825acb-2464-4983-becd-072c73f6e3e4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.006704493s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-27rqf" [0513181f-349f-406d-bee0-2833c0e27ccb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003526183s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-147021 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-147021 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20251212-v0.29.0-alpha-105-g20ccfc88
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.34s)
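
The image check lists everything cached in the node's runtime and reports anything outside the expected Kubernetes image set; the kindnetd and busybox entries above are informational, not failures. A rough way to eyeball the same list by hand, assuming the JSON output is an array of objects with a repoTags field and that jq is installed:

    # List all image tags known to the node's container runtime, one per line.
    out/minikube-linux-amd64 -p embed-certs-147021 image list --format=json \
      | jq -r '.[].repoTags[]' | sort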

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-601560 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (53.83s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.826164531s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (69.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m9.388474201s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (69.39s)
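
--enable-default-cni=true selects minikube's legacy built-in CNI rather than deploying a third-party plugin, so on the node it should show up as a CNI config written under /etc/cni/net.d (the exact file name is an assumption on our part). A quick way to look, using the same ssh entry point the other tests use:

    # List the CNI configuration the node ended up with; file names here are not guaranteed.
    out/minikube-linux-amd64 ssh -p enable-default-cni-601560 "ls /etc/cni/net.d"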

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-txfvq" [646b819d-dbb3-4aab-a10f-da140ba4c46c] Running
E1217 20:04:01.064270  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/addons-695107/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00695792s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-601560 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-601560 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qdpjb" [7dccd40a-a9a6-4e23-8e32-cea82b7a098f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-qdpjb" [7dccd40a-a9a6-4e23-8e32-cea82b7a098f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005087458s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (45.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (45.895858695s)
--- PASS: TestNetworkPlugins/group/flannel/Start (45.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-601560 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (63.73s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-601560 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.731520916s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.73s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-601560 "pgrep -a kubelet"
I1217 20:04:42.199482  375797 config.go:182] Loaded profile config "custom-flannel-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-601560 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-dgk26" [6a288a06-ae66-413b-9bda-ec910bd714f9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-dgk26" [6a288a06-ae66-413b-9bda-ec910bd714f9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.00370431s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-601560 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-nsm52" [ce5bc748-6226-4901-9be0-5d1f48558826] Running
E1217 20:04:53.390785  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:53.397294  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:53.408818  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:53.430216  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:53.471680  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:53.553184  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:53.714878  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:54.036307  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:54.678348  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:55.960673  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.552714  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.559204  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.570630  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.592049  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.633501  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.715044  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:04:57.877389  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005540222s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-601560 "pgrep -a kubelet"
E1217 20:04:58.199196  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1217 20:04:58.519973  375797 config.go:182] Loaded profile config "flannel-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-601560 replace --force -f testdata/netcat-deployment.yaml
E1217 20:04:58.522501  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-nw2w6" [8eac9546-7824-46e8-b616-e0a149b5c607] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 20:04:58.841335  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-nw2w6" [8eac9546-7824-46e8-b616-e0a149b5c607] Running
E1217 20:05:02.685846  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1217 20:05:03.643892  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/no-preload-832842/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004262799s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-601560 "pgrep -a kubelet"
E1217 20:05:00.123601  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1217 20:05:00.126662  375797 config.go:182] Loaded profile config "enable-default-cni-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-601560 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-m6gbp" [46ce211a-ceb6-4643-99e1-22da10e1ef0b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-m6gbp" [46ce211a-ceb6-4643-99e1-22da10e1ef0b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004147584s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-601560 exec deployment/netcat -- nslookup kubernetes.default
E1217 20:05:07.808170  375797 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/old-k8s-version-894575/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-601560 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-601560 "pgrep -a kubelet"
I1217 20:05:39.685786  375797 config.go:182] Loaded profile config "bridge-601560": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-601560 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-gqhcl" [a5221b21-ac20-4cc6-9c98-82f0d0a4d55b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-gqhcl" [a5221b21-ac20-4cc6-9c98-82f0d0a4d55b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.016963753s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-601560 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-601560 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

                                                
                                    

Test skip (34/415)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
148 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
149 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
150 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
212 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0
262 TestGvisorAddon 0
284 TestImageBuild 0
285 TestISOImage 0
349 TestChangeNoneUser 0
352 TestScheduledStopWindows 0
354 TestSkaffold 0
370 TestStartStop/group/disable-driver-mounts 0.21
385 TestNetworkPlugins/group/kubenet 3.45
393 TestNetworkPlugins/group/cilium 3.84
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:765: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-890254" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-890254
--- SKIP: TestStartStop/group/disable-driver-mounts (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-601560 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-601560

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-601560

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-601560

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-601560

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/hosts:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/resolv.conf:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-601560

>>> host: crictl pods:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: crictl containers:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> k8s: describe netcat deployment:
error: context "kubenet-601560" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-601560" does not exist

>>> k8s: netcat logs:
error: context "kubenet-601560" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-601560" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-601560" does not exist

>>> k8s: coredns logs:
error: context "kubenet-601560" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-601560" does not exist

>>> k8s: api server logs:
error: context "kubenet-601560" does not exist

>>> host: /etc/cni:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: ip a s:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: ip r s:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: iptables-save:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: iptables table nat:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-601560" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-601560" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-601560" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: kubelet daemon config:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> k8s: kubelet logs:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:57:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-059470
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:58:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-322567
contexts:
- context:
    cluster: cert-expiration-059470
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:57:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-059470
  name: cert-expiration-059470
- context:
    cluster: kubernetes-upgrade-322567
    user: kubernetes-upgrade-322567
  name: kubernetes-upgrade-322567
current-context: ""
kind: Config
users:
- name: cert-expiration-059470
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.crt
    client-key: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.key
- name: kubernetes-upgrade-322567
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.crt
    client-key: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-601560

>>> host: docker daemon status:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: docker daemon config:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: docker system info:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: cri-docker daemon status:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: cri-docker daemon config:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: cri-dockerd version:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: containerd daemon status:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: containerd daemon config:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: containerd config dump:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: crio daemon status:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: crio daemon config:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: /etc/crio:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

>>> host: crio config:
* Profile "kubenet-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-601560"

----------------------- debugLogs end: kubenet-601560 [took: 3.273658349s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-601560" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-601560
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

x
+
TestNetworkPlugins/group/cilium (3.84s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-601560 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-601560

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-601560

>>> host: /etc/nsswitch.conf:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/hosts:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/resolv.conf:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-601560

>>> host: crictl pods:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: crictl containers:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> k8s: describe netcat deployment:
error: context "cilium-601560" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-601560" does not exist

>>> k8s: netcat logs:
error: context "cilium-601560" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-601560" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-601560" does not exist

>>> k8s: coredns logs:
error: context "cilium-601560" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-601560" does not exist

>>> k8s: api server logs:
error: context "cilium-601560" does not exist

>>> host: /etc/cni:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: ip a s:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: ip r s:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: iptables-save:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: iptables table nat:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-601560

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-601560

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-601560" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-601560" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-601560

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-601560

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-601560" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-601560" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-601560" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-601560" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-601560" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: kubelet daemon config:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> k8s: kubelet logs:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:57:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-059470
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22186-372245/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:58:13 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-322567
contexts:
- context:
    cluster: cert-expiration-059470
    extensions:
    - extension:
        last-update: Wed, 17 Dec 2025 19:57:31 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-059470
  name: cert-expiration-059470
- context:
    cluster: kubernetes-upgrade-322567
    user: kubernetes-upgrade-322567
  name: kubernetes-upgrade-322567
current-context: ""
kind: Config
users:
- name: cert-expiration-059470
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.crt
    client-key: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/cert-expiration-059470/client.key
- name: kubernetes-upgrade-322567
  user:
    client-certificate: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.crt
    client-key: /home/jenkins/minikube-integration/22186-372245/.minikube/profiles/kubernetes-upgrade-322567/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-601560

>>> host: docker daemon status:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: docker daemon config:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: docker system info:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: cri-docker daemon status:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: cri-docker daemon config:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: cri-dockerd version:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: containerd daemon status:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: containerd daemon config:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: containerd config dump:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: crio daemon status:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: crio daemon config:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: /etc/crio:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

>>> host: crio config:
* Profile "cilium-601560" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-601560"

----------------------- debugLogs end: cilium-601560 [took: 3.656438257s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-601560" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-601560
--- SKIP: TestNetworkPlugins/group/cilium (3.84s)